An error occurred while opening that folder on the FTP server - Linux

I created an FTP server with pure-ftpd on a Linux server:
sudo apt-get install pure-ftpd
sudo bash
echo "yes" > /etc/pure-ftpd/conf/Daemonize
echo "yes" > /etc/pure-ftpd/conf/NoAnonymous
echo "yes" > /etc/pure-ftpd/conf/ChrootEveryone
echo "yes" > /etc/pure-ftpd/conf/IPV4Only
echo "no" > /etc/pure-ftpd/conf/ProhibitDotFilesWrite
but when I try to access the FTP server from File Explorer in Windows 10 via ftp://x.x.x.x with a username and password, I get this error:
an error occurred while opening that folder on the ftp server
I gave all permissions to the root folder,
and I added this line to the configuration:
echo "10000 60000" > /etc/pure-ftpd/conf/PassivePortRange
sudo systemctl restart pure-ftpd
but I still get the same error. How can I solve this?

Use of other FTP servers has shown the same client-side result. To access certain directories on the server via FTP, there are often multiple requirements. After the client provides a user and password that are valid on the target host:
Various FTP servers often need additional configuration that allows access to specific directories. Sometimes there's a global setting that lists one or more directories and applies to all client access, e.g. "/ftp". Another variety requires creating named FTP group(s), specifying one or more directories accessible to each group, and adding users to one or more groups.
Although not always well documented, FTP servers tend to provide logging for any connection or session. Check on the FTP server host for more detailed error information in a place like /var/log/messages. Enabling session or error logging, and the log-file location, may be additional configuration settings. If there's nothing obvious, file locations can sometimes be discovered with a command line similar to this:
strings /usr/etc/ftp-server | grep /
Also remember to restart your FTP server after config changes. Some network daemons re-read their config files after receiving a SIGHUP, e.g.:
pkill -1 server-name
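For the pure-ftpd setup above, a hedged starting point (assuming the Debian/Ubuntu package defaults: a systemd unit named pure-ftpd, logging via syslog):
sudo systemctl status pure-ftpd       # confirm the daemon is actually running
sudo journalctl -u pure-ftpd -n 50    # recent log lines for the unit
grep -i pure-ftpd /var/log/syslog     # pure-ftpd logs through syslog by default
A refused passive-mode data connection in those logs would point at a firewall blocking the 10000-60000 range rather than at the FTP configuration itself.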

Related

PsExec - The file cannot be accessed by the system

I'm trying to execute a .bat file on a server in a local network with PsExec.
I'm currently trying with this command:
.\PsExec.exe -i -u Administrator \\192.168.4.36 -s -d cmd.exe -c "Z:\NX_SystemSetup\test.bat"
The server has no password (it has no internet connection and is running a clean install of Windows Server 2016), so I'm currently not entering one, and when a password is asked for I simply press Enter, which seems to work. Also, the .bat file currently only opens Notepad on execution.
When I enter this command, I get the message "The file cannot be accessed by the system".
I've tried executing it in PowerShell with administrator privileges (and also without, since I saw another user on Stack Overflow mention that it only worked for them that way), but without success.
I'm guessing this is a privilege problem, since it "can't be accessed", which would indicate to me that the file was indeed found.
I used net share in a cmd and it says that C:\ on my server is shared.
The file I'm trying to copy is also not in any kind of restricted folder.
Any ideas what else I could try?
EDIT:
I have done a lot more troubleshooting.
On the server, I went into the firewall settings and opened TCP ports 135 and 445 explicitly, since according to Google, PsExec uses these.
Also on the server, I opened Properties of the "Windows" folder in C: and added an admin$ share, where I gave everyone all rights to the folder (stupid, I know, but I'm desperate for this to work).
I also played around a bunch more with different commands. Not even .\PsExec.exe \\192.168.4.36 ipconfig seems to work. I still get the same error: "The file cannot be accessed by the system".
This is honestly maddening. There is no documentation of this error anywhere on the internet. Searching explicitly for "File cannot be accessed" still only brings up results for "File cannot be found" and similar.
I'm surely just missing something obvious. Right?
EDIT 2
I also tried adding the domain name in front of the username. I checked the domain by using set user in cmd on the server.
.\PsExec.exe \\192.168.4.16 -u DomainName\Administrator -p ~ -c "C:\Users\UserName\Documents\Mellanox Update.bat"
-p ~ seems to work for the password, so I added that.
I also tried creating a shortcut of the .bat file and executing it as Administrator, using it instead of the original .bat file. The error stays the same: "The file cannot be accessed by the system".
As additional info, the PC I'm trying to send the command from runs Windows 10; the server is running Windows Server 2016.
So, the reason for this specific error is as simple and as stupid as it gets.
Turns out I was using the wrong IP. The IP I was using is an IPMI address, which does not allow any traffic (other than IPMI-related traffic).
I have not gotten it to work yet, since I've run into some different errors, but the original question/problem has been resolved.
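In hindsight, a quick sanity check that would have caught this early is verifying that the target address answers on TCP 445, the SMB port PsExec depends on. A hedged PowerShell check, using the address from the question:
Test-NetConnection -ComputerName 192.168.4.36 -Port 445
# TcpTestSucceeded : False here points at a wrong or unreachable IP
# (e.g. an IPMI address) before any deeper PsExec troubleshooting.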

pywatchdog and pyinotify not detecting changes on files inside ftp created directories

I have an application monitoring files sent to an FTP server (proftpd 1.3.5a). I am using pywatchdog to monitor file creation in the FTP server root (the app runs locally), but under one very specific circumstance it does not issue a notification: when I create a new directory through FTP and, after that, create a file under this directory. The file creation/modification events are not caught!
In order to reproduce it in a simple way, I've used pyinotify (0.9.6) directly, and it looks like the problem comes from there. A simple way to reproduce the problem:
Install proftpd and pyinotify (python3) on the server with default settings
On the server, run the following command to monitor the FTP root (recursive and auto-add turned on, considering user "user"):
python3 -m pyinotify -v -r -a /home/user
On the client, create a sample.txt, connect to the FTP server, and issue the following commands, in this order:
mkdir dir_a
cd dir_a
put sample.txt
There will be no events related to sample.txt - neither create nor modify!
I've tried to remove the FTP factor from the issue by manually creating and moving directories inside the observed target and creating files inside these directories, but then the issue does not happen - it all works smoothly.
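For reference, a minimal pyinotify script equivalent to the CLI invocation above (rec=True and auto_add=True mirror -r and -a; pyinotify 0.9.6 assumed) reproduces the same miss:
import pyinotify

wm = pyinotify.WatchManager()
# watch for file creation, modification, and completed writes
mask = pyinotify.IN_CREATE | pyinotify.IN_MODIFY | pyinotify.IN_CLOSE_WRITE

class Handler(pyinotify.ProcessEvent):
    def process_default(self, event):
        # fires for files created directly under /home/user, but nothing
        # fires for sample.txt uploaded into the freshly FTP-created dir_a
        print(event.maskname, event.pathname)

wm.add_watch('/home/user', mask, rec=True, auto_add=True)
pyinotify.Notifier(wm, Handler()).loop()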
Any help will be appreciated!

ProFTPd support for MLST and MLSD commands

Have another interesting problem. My company recently switched over to ProFTPD to handle its FTP and SFTP needs. We primarily run RHEL 5 servers. Our users are able to log in and transfer files without issue (for the most part, anyway :-P).
We ran into a strange issue, however, with one of our clients, who needs to list an individual file (in their FTP session) after performing a file transfer operation. They are able to list an entire directory just fine with 'ls', but when doing so with an exact file name (and/or with a wildcard), the listing fails.
I was able to duplicate the issue on my Windows workstation using ncftp, but NOT on my Linux workstation. After turning on debugging for both clients, as well as enabling full FTP command logging on the server side, I discovered that the Linux FTP client uses a LIST command whereas ncftp uses an MLSD command.
Linux client:
ftp> debug
Debugging on (debug=1).
ftp> ls file.txt
ftp: setsockopt (ignored): Permission denied
---> PASV
227 Entering passive mode (X.X.X.X).
---> LIST file.txt
150 Opening ASCII mode data connection for file list
-rw-r--r-- 1 0 root 9318400 Aug 28 07:29 file.txt
226 Transfer complete
ncftp (Windows) client:
ncftp / > debug
ncftp / > ls file.txt
> ls file.txt
Cmd: PASV
227: Entering passive mode (X.X.X.X).
Cmd: MLSD file.txt
550: 'file.txt' is not a directory
List failed.
From what I've been able to gather so far, MLSD and MLST are the extended versions of the traditional FTP LIST command(s). But when listing an individual file, shouldn't the client be issuing the server a MLST command instead of a MLSD command? MLSD should be used to list entire directories from what I've read so far.
I also connected to our old FTP server (running VSFTP) with multiple clients in debug mode (including ncftp), and confirmed that they were ALL using the older LIST command for everything, and it worked perfectly. Whether this was because it was enforced on the server-side, or just by coincidence, I do not know.
I've also read that mod_facts needs to be enabled for MLSD/MLST to work. I've confirmed that my proftpd version supports it, and that it's enabled on the server:
[root@server ~]# proftpd -v
ProFTPD Version 1.3.5
From proftpd.conf:
# Adding support for extended FTP listing commands (e.g. MLST, MLSD, etc)
LoadModule mod_facts.c
<IfModule mod_facts.c>
FactsAdvertise off
</IfModule>
I've also tried toggling FactsAdvertise on and off, reloading the service as I do so, and the ncftp client STILL wants to do an MLSD of the individual file!
So my two basic questions are:
1. How can I get proftpd to play nice with the MLSD/MLST commands, and if that's too much hassle...
2. How do I force FTP clients connecting to the ProFTPD server to use the traditional LIST command(s), as was evidently the case with our old FTP service (VSFTP)?
Thanks in advance!
There have been other reports that ncftp(1) does not implement MLSD properly. Specifically, per the RFC specification, the MLSD command is only supposed to be used on directories, not on files. Second, "FactsAdvertise off" tells mod_facts NOT to include "MLSD" in the FEAT response; conformant clients are supposed to use the FEAT response to determine whether the server does indeed handle the MLSD/MLST commands. ncftp(1) appears to ignore the FEAT response in this regard.
Given that your mod_facts module is a shared module, all you need to do is omit the "LoadModule mod_facts.c" line from your proftpd.conf. Then proftpd will not support MLSD/MLST, and ncftp(1) will fall back to using LIST.
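In config terms, that is just a one-line change in proftpd.conf (a sketch):
# MLSD/MLST disabled by not loading the facts module:
#LoadModule mod_facts.c
Then syntax-check the configuration with proftpd -t and restart the service (e.g. service proftpd restart on RHEL 5).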
Hope this helps!
My apologies, I forgot I had this still open. We found a fix for this on the ProFTPD forums:
https://forums.proftpd.org/smf/index.php?topic=11604.0

403 Forbidden trying to access folder on browser

I have a folder named Tarea in ~/Documents/WebD/, which has a public_html folder inside. To access it I have tried creating a virtual host in a thousand ways, but it didn't work. Now I'm trying to get there by creating a symlink from Tarea to /var/www/html/tarea and accessing it via localhost/tarea/public_html, but I get:
Forbidden
You don't have permission to access /tarea on this server.
Apache/2.2.15 (CentOS) Server at localhost Port 80
I tried a lot of the different fixes mentioned on forums (changing httpd.conf, giving permissions to Apache, etc.), but none of them worked.
Any suggestions?
It could be SELinux preventing Apache from accessing those files. I would try switching SELinux into permissive mode and seeing if your permissions open up. You can read more about SELinux and Apache here.
To put SELinux into permissive mode, do:
echo 0 > /selinux/enforce
To put SELinux back into enforcing mode, do:
echo 1 > /selinux/enforce
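If permissive mode does make the 403 disappear, a more durable fix than leaving enforcement off is relabeling the actual directory. A sketch, assuming the paths from the question:
sudo setenforce 0    # same effect as echo 0 > /selinux/enforce
sudo chcon -R -t httpd_sys_content_t ~/Documents/WebD/Tarea
sudo setenforce 1    # back to enforcing once it works
Apache also needs execute (x) permission on every directory along the symlink's target path, including your home directory.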
Hopefully you have already checked permissions on the /tarea folder. The user running the Apache server should have read/write permissions on the specific directories.

Setting up FTP on Amazon Cloud Server [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 4 years ago.
I am trying to set up FTP on an Amazon Cloud Server, but without luck.
I searched all over the net and there are no concrete steps on how to do it.
I found those commands to run:
$ yum install vsftpd
$ ec2-authorize default -p 20-21
$ ec2-authorize default -p 1024-1048
$ vi /etc/vsftpd/vsftpd.conf
# --- Add following lines at the end of file ---
pasv_enable=YES
pasv_min_port=1024
pasv_max_port=1048
pasv_address=<Public IP of your instance>
$ /etc/init.d/vsftpd restart
But I don't know where to write them.
Jaminto did a great job of answering the question, but I recently went through the process myself and wanted to expand on Jaminto's answer.
I'm assuming that you already have an EC2 instance created and have associated an Elastic IP Address to it.
Step #1: Install vsftpd
SSH to your EC2 server. Type:
> sudo yum install vsftpd
This should install vsftpd.
Step #2: Open up the FTP ports on your EC2 instance
Next, you'll need to open up the FTP ports on your EC2 server. Log in to the AWS EC2 Management Console and select Security Groups from the navigation tree on the left. Select the security group assigned to your EC2 instance. Then select the Inbound tab, then click Edit:
Add two Custom TCP Rules with port ranges 20-21 and 1024-1048. For Source, you can select 'Anywhere'. If you decide to set Source to your own IP address, be aware that your IP address might change if it is being assigned via DHCP.
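If you prefer the command line, a hedged AWS CLI equivalent of those two rules would be (the security group ID is a placeholder):
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 20-21 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 1024-1048 --cidr 0.0.0.0/0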
Step #3: Make updates to the vsftpd.conf file
Edit your vsftpd conf file by typing:
> sudo vi /etc/vsftpd/vsftpd.conf
Disable anonymous FTP by changing this line:
anonymous_enable=YES
to
anonymous_enable=NO
Then add the following lines to the bottom of the vsftpd.conf file:
pasv_enable=YES
pasv_min_port=1024
pasv_max_port=1048
pasv_address=<Public IP of your instance>
Your vsftpd.conf file should look something like the following - except make sure to replace the pasv_address with your public facing IP address:
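In sketch form, the tail of the file would read:
anonymous_enable=NO
pasv_enable=YES
pasv_min_port=1024
pasv_max_port=1048
pasv_address=<Public IP of your instance>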
To save changes, press escape, then type :wq, then hit enter.
Step #4: Restart vsftpd
Restart vsftpd by typing:
> sudo /etc/init.d/vsftpd restart
You should see a message that looks like:
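Shutting down vsftpd:                                      [  OK  ]
Starting vsftpd for vsftpd:                                [  OK  ]
(a sketch of the typical SysV init output; exact wording varies by distro)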
If this doesn't work, try:
> sudo /sbin/service vsftpd restart
Step #5: Create an FTP user
If you take a peek at /etc/vsftpd/user_list, you'll see the following:
# vsftpd userlist
# If userlist_deny=NO, only allow users in this file
# If userlist_deny=YES (default), never allow users in this file, and
# do not even prompt for a password.
# Note that the default vsftpd pam config also checks /etc/vsftpd/ftpusers
# for users that are denied.
root
bin
daemon
adm
lp
sync
shutdown
halt
mail
news
uucp
operator
games
nobody
This is basically saying, "Don't allow these users FTP access." vsftpd will allow FTP access to any user not on this list.
So, in order to create a new FTP account, you may need to create a new user on your server. (Or, if you already have a user account that's not listed in /etc/vsftpd/user_list, you can skip to the next step.)
Creating a new user on an EC2 instance is pretty simple. For example, to create the user 'bret', type:
> sudo adduser bret
> sudo passwd bret
Here's what it will look like:
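$ sudo adduser bret
$ sudo passwd bret
Changing password for user bret.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
(a sketch of the usual prompt sequence; exact wording may vary)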
Step #6: Restricting users to their home directories
At this point, your FTP users are not restricted to their home directories. That's not very secure, but we can fix it pretty easily.
Edit your vsftpd conf file again by typing:
> sudo vi /etc/vsftpd/vsftpd.conf
Uncomment the line:
chroot_local_user=YES
It should look like this once you're done:
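chroot_local_user=YES
(a sketch: the line simply loses its leading #)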
Restart the vsftpd server again like so:
> sudo /etc/init.d/vsftpd restart
All done!
Appendix A: Surviving a reboot
vsftpd doesn't automatically start when your server boots. If you're like me, that means that after rebooting your EC2 instance, you'll feel a moment of terror when FTP seems to be broken - but in reality, it's just not running! Here's a handy way to fix that:
> sudo chkconfig --level 345 vsftpd on
Alternatively, if you are using Red Hat, another way to manage your services is this nifty graphical user interface for controlling which services should automatically start:
> sudo ntsysv
Now vsftpd will automatically start up when your server boots up.
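On newer, systemd-based images, the equivalent command (assuming the service unit is named vsftpd) would be:
sudo systemctl enable vsftpd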
Appendix B: Changing a user's FTP home directory
* NOTE: Iman Sedighi has posted a more elegant solution for restricting users access to a specific directory. Please refer to his excellent solution posted as an answer *
You might want to create a user and restrict their FTP access to a specific folder, such as /var/www. In order to do this, you'll need to change the user's default home directory:
> sudo usermod -d /var/www/ username
In this specific example, it's typical to give the user permissions to the 'www' group, which is often associated with the /var/www folder:
> sudo usermod -a -G www username
To enable passive ftp on an EC2 server, you need to configure the ports that your ftp server should use for inbound connections, then open a list of available ports for the ftp client data connections.
I'm not that familiar with Linux, but the commands you posted are the steps to install the FTP server, configure the EC2 firewall rules (through the AWS API), and then configure the FTP server to use the ports you allowed on the EC2 firewall.
So this step installs the FTP server (vsftpd):
> yum install vsftpd
These steps configure the FTP server:
> vi /etc/vsftpd/vsftpd.conf
-- Add following lines at the end of file --
pasv_enable=YES
pasv_min_port=1024
pasv_max_port=1048
pasv_address=<Public IP of your instance>
> /etc/init.d/vsftpd restart
but the other two steps are easier done through the Amazon console under EC2 Security Groups. There you need to configure the security group assigned to your server to allow connections on ports 20-21 and 1024-1048.
Thanks @clone45 for the nice solution. But I had just one important problem with Appendix B of his solution. Immediately after I changed the home directory to /var/www/html, I couldn't connect to the server through SSH and SFTP, because it always showed the following error:
Permission denied (publickey)
or, in FileZilla, I received this error:
No supported authentication methods available (server: publickey)
But I could still access the server through a normal FTP connection.
If you encounter the same error, just undo Appendix B of @clone45's solution by setting the user's default home directory back:
sudo usermod -d /home/username/ username
But when you set the user's default home directory back, the user has access to many other folders outside /var/www/html. So to secure your server, follow these steps:
1- Make an sftponly group
Make a group for all users whose access you want to restrict to FTP/SFTP access to /var/www/html. To make the group:
sudo groupadd sftponly
2- Jail the chroot
To restrict this group's access to the server via SFTP, you must chroot-jail them so the group's users cannot access any folder except the html folder inside their chroot. To do this, open /etc/ssh/sshd_config in vim with sudo.
At the end of the file, comment out this line:
Subsystem sftp /usr/libexec/openssh/sftp-server
And then add this line below that:
Subsystem sftp internal-sftp
So we replaced the sftp-server subsystem with internal-sftp. Then add the following lines below it:
Match Group sftponly
ChrootDirectory /var/www
ForceCommand internal-sftp
AllowTcpForwarding no
After adding these lines, save your changes and restart the SSH service:
sudo service sshd restart
3- Add the user to sftponly group
Any user whose access you want to restrict must be a member of the sftponly group. We add the user to sftponly with:
sudo usermod -G sftponly username
4- Restrict user access to just /var/www/html
To restrict the user's access to just the /var/www/html folder, we need to make a directory named 'html' in that user's home directory and then bind-mount /var/www onto /home/username/html, as follows:
sudo mkdir /home/username/html
sudo mount --bind /var/www /home/username/html
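Note that a bind mount set up this way does not survive a reboot. To make it persistent, an /etc/fstab entry along these lines can be added (a sketch; 'username' is a placeholder):
/var/www  /home/username/html  none  bind  0  0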
5- Set write access
If the user needs write access to /var/www/html, then you must jail the user at /var/www, which must have root:root ownership and permissions of 755. You then need to give /var/www/html ownership of root:sftponly and permissions of 775, by running the following commands:
sudo chmod 755 /var/www
sudo chown root:root /var/www
sudo chmod 775 /var/www/html
sudo chown root:sftponly /var/www/html
6- Block shell access
If you want to block shell access to make things more secure, just change the user's default shell to /bin/false, as follows:
sudo usermod -s /bin/false username
Great article... worked like a breeze on the Amazon Linux AMI.
Two more useful commands:
To change the default FTP upload folder:
Step 1: Edit /etc/vsftpd/vsftpd.conf
Step 2: Create a new entry at the bottom of the file:
local_root=/var/www/html
To apply read, write, and delete permissions to the files under a folder, so that you can manage them using an FTP client:
find /var/www/html -type d -exec chmod 777 {} \;
In case you have ufw enabled, remember to allow FTP:
> sudo ufw allow ftp
It took me 2 days to realise that I had ufw enabled.
It will not be OK until you add your user to the www group with the following command:
sudo usermod -a -G www <USER>
This solves the permission problem.
Set the default path by adding this:
local_root=/var/www/html
Don't forget to update your iptables firewall, if you have one, to allow the 20-21 and 1024-1048 ranges in.
Do this in /etc/sysconfig/iptables, adding lines like these:
-A INPUT -m state --state NEW -m tcp -p tcp --dport 20:21 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 1024:1048 -j ACCEPT
And restart iptables with the command:
sudo service iptables restart
I've simplified clone45's steps. Open the ports as he mentioned, then run:
sudo su
sudo yum install vsftpd
echo -n "Public IP of your instance: " && read publicip
echo -e "anonymous_enable=NO\npasv_enable=YES\npasv_min_port=1024\npasv_max_port=1048\npasv_address=$publicip\nchroot_local_user=YES" >> /etc/vsftpd/vsftpd.conf
sudo /etc/init.d/vsftpd restart
I followed clone45's answer all the way to the end. A great article! Since I needed FTP access to install plug-ins on one of my WordPress sites, I changed the home directory to /var/www/mysitename. Then I went on to add my FTP user to the apache (or www) group like this:
sudo usermod -a -G apache myftpuser
After this I still saw this error on WP's plugin installation page: "Unable to locate WordPress Content directory (wp-content)". I searched and found this solution in a wordpress.org support thread: https://wordpress.org/support/topic/unable-to-locate-wordpress-content-directory-wp-content and added the following to the end of wp-config.php:
if (is_admin()) {
    // anonymous function instead of create_function(), which was removed in PHP 8
    add_filter('filesystem_method', function ($a) { return 'direct'; });
    define('FS_CHMOD_DIR', 0751);
}
After this my WP plugin was installed successfully.
Maybe worth mentioning, in addition to clone45's answer:
Fixing Write Permissions for Chrooted FTP Users in vsftpd
The vsftpd version that comes with Ubuntu 12.04 Precise does not permit chrooted local users to write by default. By default you will have this in /etc/vsftpd.conf:
chroot_local_user=YES
write_enable=YES
In order to allow local users to write, you need to add the following parameter:
allow_writeable_chroot=YES
Note: Issues with write permissions may show up as the following FileZilla errors:
Error: GnuTLS error -15: An unexpected TLS packet was received.
Error: Could not connect to server
References:
Fixing Write Permissions for Chrooted FTP Users in vsftpd
VSFTPd stopped working after update
In case you are getting "530 password incorrect", one more step is needed: in the file /etc/shells, add the following line:
/bin/false
FileZilla is a good FTP tool to set up with Amazon Cloud.
Download the FileZilla client from https://filezilla-project.org/
Click on File -> Site Manager ->
New Site
Provide the host name: the IP address of your Amazon Cloud location (and port, if any)
Protocol - SFTP (may change based on your requirement)
Login Type - Normal (so the system will not ask for the password each time)
Provide the user name and password.
Connect.
You need to do these steps only once; later it will upload content to the same IP address and the same site.
