ProFTPd support for MLST and MLSD commands - linux

Have another interesting problem. My company recently switched over to ProFTPD to handle its FTP and SFTP needs. We primarily run RHEL 5 servers. Our users are able to log in and transfer files without issue (for the most part, anyway :-P).
However, I ran into a strange issue with one of our clients, who needs to list an individual file (in their FTP session) after performing a file transfer operation. They are able to list an entire directory just fine with 'ls', but when doing so with an exact file name (and/or a wildcard), the listing fails.
I was able to duplicate the issue on my Windows workstation using ncftp, but NOT on my Linux workstation. After turning on debugging for both clients, as well as enabling full FTP command logging on the server side, I discovered that the Linux FTP client uses a LIST command whereas ncftp uses an MLSD command.
Linux client:
ftp> debug
Debugging on (debug=1).
ftp> ls file.txt
ftp: setsockopt (ignored): Permission denied
---> PASV
227 Entering passive mode (X.X.X.X).
---> LIST file.txt
150 Opening ASCII mode data connection for file list
-rw-r--r-- 1 0 root 9318400 Aug 28 07:29 file.txt
226 Transfer complete
ncftp (Windows) client:
ncftp / > debug
ncftp / > ls file.txt
> ls file.txt
Cmd: PASV
227: Entering passive mode (X.X.X.X).
Cmd: MLSD file.txt
550: 'file.txt' is not a directory
List failed.
From what I've been able to gather so far, MLSD and MLST are the extended versions of the traditional FTP LIST command(s). But when listing an individual file, shouldn't the client be issuing an MLST command instead of an MLSD command? From what I've read so far, MLSD should only be used to list entire directories.
I also connected to our old FTP server (running vsftpd) with multiple clients in debug mode (including ncftp), and confirmed that they ALL used the older LIST command for everything, and it worked perfectly. Whether this was because it was enforced on the server side, or just a coincidence, I do not know.
I've also read that mod_facts needs to be enabled for MLSD/MLST to work. I've confirmed that my proftpd version supports it, and that it's enabled on the server:
[root@server ~]# proftpd -v
ProFTPD Version 1.3.5
From proftpd.conf:
# Adding support for extended FTP listing commands (e.g. MLST, MLSD, etc)
LoadModule mod_facts.c
<IfModule mod_facts.c>
FactsAdvertise off
</IfModule>
I've also tried toggling FactsAdvertise on and off, reloading the service as I do so, and the ncftp client STILL wants to do an MLSD of the individual file!
So my two basic questions are:
1. How can I get proftpd to play nice with the MLSD/MLST commands, and if that's too much hassle . . .
2. How do I force FTP clients connecting to the ProFTPD server to use the traditional LIST command(s), as was evidently the case with our old FTP service (vsftpd)?
Thanks in advance!

There have been other reports that ncftp(1) does not implement MLSD properly. Specifically, per the RFC specification (RFC 3659), the MLSD command is only supposed to be used on directories, not on files. Second, "FactsAdvertise off" tells mod_facts NOT to include "MLSD" in the FEAT response; conformant clients are supposed to use the FEAT response to determine whether the server does indeed handle the MLSD/MLST commands. ncftp(1) appears to ignore the FEAT response in this regard.
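You can check what your server advertises yourself from any client that can send raw commands; a quick sketch using the stock command-line ftp client (the FEAT output below is illustrative, not a capture from your server):
ftp> quote FEAT
211-Features:
 MDTM
 MLST modify*;perm*;size*;type*;unique*;
 SIZE
211 End
With "FactsAdvertise off", the MLST line should disappear from that response, and a conformant client will then fall back to LIST.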
Given that your mod_facts module is built as a shared module, all you need to do is omit the "LoadModule mod_facts.c" line from your proftpd.conf. Then proftpd will not support MLSD/MLST at all, and ncftp(1) will fall back to using LIST.
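Concretely, that means commenting out (or deleting) the entire block you added and restarting the service; a minimal sketch, assuming the stock RHEL init script:
# LoadModule mod_facts.c
# <IfModule mod_facts.c>
#   FactsAdvertise off
# </IfModule>
service proftpd restart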
Hope this helps!

My apologies, I forgot I still had this open. We found a fix for this on the ProFTPD forums:
https://forums.proftpd.org/smf/index.php?topic=11604.0

Related

an error occurred while opening that folder on the ftp server

I created an FTP server with pure-ftpd on a Linux server:
sudo apt-get install pure-ftpd
sudo bash
echo "yes" > /etc/pure-ftpd/conf/Daemonize
echo "yes" > /etc/pure-ftpd/conf/NoAnonymous
echo "yes" > /etc/pure-ftpd/conf/ChrootEveryone
echo "yes" > /etc/pure-ftpd/conf/IPV4Only
echo "no" > /etc/pure-ftpd/conf/ProhibitDotFilesWrite
but when I try to access the FTP server from File Explorer in Windows 10 via ftp://x.x.x.x with a username and password, I get this error:
an error occurred while opening that folder on the ftp server
I gave all permissions to the root folder, and I added this line to the configuration:
echo "10000 60000" > /etc/pure-ftpd/conf/PassivePortRange
sudo systemctl restart pure-ftpd
but I still get the same error. How can I solve this?
Use of other ftp servers has shown the same client-side result. To access certain directories on the server via ftp, there are often multiple requirements after the client provides a user and password that are valid on the target host:
Various ftp servers often need additional configuration that allows access to specific directories. Sometimes there's a global setting that lists one or more directories and applies to all client access, e.g. "/ftp". Another variety requires creating named ftp group(s), specifying one or more directories accessible to each group, and adding users to one or more groups.
Although not always well documented, ftp servers tend to provide logging for any connection or session. Check on the ftp server host for more detailed error information in a place like /var/log/messages. Enabling session or error logging, and setting the log-file location, may require additional configuration settings. If there's nothing obvious, log locations can sometimes be discovered with a command line similar to this:
strings /usr/etc/ftp-server | grep /
Also remember to restart your ftp server after config changes. Some network daemons are known to re-read config files after receiving a SIGHUP, eg:
pkill -1 server-name
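For the pure-ftpd setup above, a quick way to get more detail is to enable verbose logging; a sketch, assuming the Debian/Ubuntu-style conf directory used in the question (VerboseLog corresponds to pure-ftpd's -d flag):
echo "yes" > /etc/pure-ftpd/conf/VerboseLog
sudo systemctl restart pure-ftpd
tail -f /var/log/syslog    # pure-ftpd messages typically land in syslog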

Ag command on local against remote files

I installed ag on localhost, and localhost has SSH access to the remote host. I don't want to install ag on the remote host; instead, I'd like to use ag from localhost to run against files on the remote host.
I'm thinking of some kind of proxy ag command, or running ag over ssh, but I'd still prefer a stable and permanent solution.
Is that possible?
The short answer is no: there's no way to run ag (or, for that matter, grep, ack, etc.) with some built-in proxy mode against a remote machine. And this makes sense if you consider that these utilities are designed to scan the contents of large sets of files -- going over the network would be too slow.
There are other options: (1) copy the files you want to search to localhost and search them there, or (2) mount the remote directory on localhost (e.g. with sshfs) and search it locally. The first option is more common, and there is a suite of unix tools to facilitate copying and syncing files between two machines, the classic one being rsync: e.g. rsync -a -e ssh . username@remotehost:~/remoteDirectoryToSync. (3) While it may seem obvious, the final option is just using the tools already on the remote machine. Almost every Linux/Unix box has grep or find, and learning and refining one's skill with these default tools pays dividends.
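A rough sketch of options (2) and (3); the host names and paths here are placeholders:
# (3) run the search on the remote machine over ssh:
ssh user@remotehost 'grep -rn "pattern" ~/project'
# (2) or mount the remote directory locally and point ag at it:
sshfs user@remotehost:/home/user/project /mnt/project
ag "pattern" /mnt/project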

Can you help me access Mac SMB share from Ubuntu using smbclient? (NT_STATUS_ACCESS_DENIED error)

I've been working on a file server product that uses smbclient to transfer files between client computers and the server. It's been working great so far with our LAMP (Ubuntu) server and Windows machines.
I'm currently trying to expand the setup to include Macs, but am having trouble with the server accessing the share on the Mac.
Here's my command and error (bracketed descriptions replace private info):
# smbclient //10.101.0.7/[share-file] -U [username]%[password] -c ls
WARNING: The "syslog" option is deprecated
NTLMSSP packet check failed due to short signature (0 bytes)!
NTLMSSP NTLM2 packet check failed due to invalid signature!
session setup failed: NT_STATUS_ACCESS_DENIED
Things I've tried:
✓ Accessing the share using a Windows machine to ensure the share is set up properly - check! Works fine there.
✓ Invoking -S off or --signing=off in the command - no change.
✓ Just looking at the shares first using smbclient -L 10.101.0.7 -U [username]%[password] - same error.
✓ Googling for an answer - check! Several people with similar problems, but no working solutions so far.
The most promising thing I've seen so far involves compiling smbclient 4.4 from source and running that with no authentication (-U ""%""), but that seems like a temporary solution based on a bug rather than a solid plan that will work for a long time. (But I'll try that next if I can't find any better ideas...)
Thanks for reading and trying to help!
Try adding --option="ntlmssp_client:force_old_spnego = yes" to the smbclient command as suggested on the samba-technical mailing list.
For me, this now lists shares on a Mac OS X server:
smbclient -U$user%$password -L $mac_host --option="ntlmssp_client:force_old_spnego = yes"
For mounting, you may need to add the nounix,sec=ntlmssp options as in
sudo mount -t cifs //$mac_host/$share $mountpoint -o nounix,sec=ntlmssp,username=$user,password=$password
On recent versions of macOS (e.g. Monterey), several configuration steps are necessary to enable SMB access from Linux:
Open System Preferences.
Select Sharing.
Select File Sharing.
Ensure that the directory is listed in Shared Folders.
Right-click/two-finger click on the share directory.
Click on Advanced Options.
Ensure Only allow SMB encrypted connections is checked.
Click OK.
Click on Options.
Click on the checkbox for Share files and folders using SMB.
Under Windows File Sharing, ensure the appropriate user is checked.
Type the user's password in the 'Authenticate' dialog box and press 'OK'.
Click 'Done'.
You should now be able to connect from Linux to the macOS share using the commands given by @mivk.

Sending .csv file from linux to windows

I want to send files (txt or csv) from Linux to Windows.
I already have a script that gets the information and puts it into a .txt or .csv file, and I've tried many ways to send this file from Linux to my computer.
The server can ping my computer's IP, but when I use the commands below, I get:
ssh: connect to host 10.10.X.X port 22: Connection timed out
scp -r fname.lname@10.10.X.X:/home/ test.txt
or
scp test.txt fname.lname@10.10.X.X:/C:/Data
Could you please help? I simply want a copy of a file (that I have on the server) on my computer, so I can use it.
There are some similar questions here with no answers.
You need an SSH server installed on Windows. Windows does not currently have an out-of-the-box SSH server. They are thinking of implementing OOB SSH servers in future releases of Windows 10.
Have a look at this link https://winscp.net/eng/docs/guide_windows_openssh_server
Also, if the file transfer that you want is a one-time transfer, you can use PuTTY with a reverse scp to retrieve the file, or you can use WinSCP ( https://winscp.net/eng/download.php ).
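As a sketch of that "reverse" approach: instead of pushing from Linux to Windows (which fails because Windows isn't running an SSH server), pull the file from the Windows side with PuTTY's pscp; the server address and paths below are placeholders:
pscp fname.lname@<linux-server>:/home/fname.lname/test.txt C:\Data\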
I usually use the command 'nc' for file transmission. But since you would have to install Cygwin to use nc on Windows, I think the simplest solution may be the following.
On linux, go to the directory of those files, and then type:
python -m SimpleHTTPServer 1234
Then on Windows you can visit 10.10.X.X:1234 in your browser, and download those files.
Note that 1234 can be replaced by any other port that is not currently in use on the Linux machine.
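Note also that SimpleHTTPServer is the Python 2 module name; on a machine where only Python 3 is available, the equivalent command would be:
python3 -m http.server 1234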

ftp: Name or Service not known

In the command line:
> ftp ftp://ftp-trace.ncbi.nih.gov/1000genomes/ftp/data/
This works on one computer but does not work on my other one. The error returned is:
ftp: ftp://ftp-trace.ncbi.nih.gov/1000genomes/ftp/data/: Name or service not known
I also tried the raw IP address which is
> ftp ftp://130.14.250.10/1000genomes/ftp/data/
But it didn't work.
What is the problem here? How can I fix this?
The ftp command accepts a server name, not a URL. Your session should likely look like:
ftp ftp-trace.ncbi.nih.gov
(Server asks for login and password)
cd /1000genomes/ftp/data/
mget *
This depends on the ftp client you are using. On Mac OS X, for example, the default command-line ftp client (from BSD) accepts the full URL, while in CentOS the default client doesn't, and you need to connect to just the hostname. So it depends on the flavor of Linux and the installed default ftp client.
Default ftp client in CentOS (ARPANET):
ftp ftp-trace.ncbi.nih.gov
cd 1000genomes/ftp/data
If you want to use the full URL in CentOS 5.9 or Fedora 18 (where I tested it), you could install an additional ftp client. For example, ncftp and lftp have the behavior you are looking for.
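Installing both is typically a single command (assuming your configured repos carry them; ncftp often comes from EPEL):
sudo yum install ncftp lftp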
ncftp:
ncftp ftp://ftp-trace.ncbi.nih.gov/1000genomes/ftp/data/
NcFTP 3.2.2 (Aug 18, 2008) by Mike Gleason (http://www.NcFTP.com/contact/).
Connecting to ...
...
Logged in to ftp-trace.ncbi.nih.gov.
Current remote directory is /1000genomes/ftp/data
lftp, also available through your favorite package manager:
lftp ftp://ftp-trace.ncbi.nih.gov/1000genomes/ftp/data/
cd ok, cwd=/1000genomes/ftp/data
lftp ftp-trace.ncbi.nih.gov:/1000genomes/ftp/data>
Another, more efficient, way to retrieve files is using wget or curl. These work for http, ftp and other protocols.
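For example (wget's -r recurses into the directory, while curl's --list-only just prints the FTP directory listing):
wget -r ftp://ftp-trace.ncbi.nih.gov/1000genomes/ftp/data/
curl --list-only ftp://ftp-trace.ncbi.nih.gov/1000genomes/ftp/data/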
It looks to me like the computer that isn't working is already adding the ftp:// prefix to the URL. Have you tried removing it from yours and seeing if that works?
> ftp ftp-trace.ncbi.nih.gov/1000genomes/ftp/data
