WinSCP: The requested name is valid, but no data of the requested type was found. Connection failed

I'm supposed to access a server, but when I use WinSCP with FTP protocol to log in, I just get a warning that
The requested name is valid, but no data of the requested type was found.
Connection failed.
I really have very little experience with working remotely on servers, or even logging into them. What are my alternatives?

This is the WSANO_DATA error. Quoting the Microsoft documentation:
The usual example for this is a host name-to-address translation attempt ... which uses the DNS (Domain Name Server). An MX record is returned but no A record—indicating the host itself exists, but is not directly reachable.
(This can possibly happen for newly registered domain names that are not fully set up yet.)
See:
https://learn.microsoft.com/en-us/windows/win32/winsock/windows-sockets-error-codes-2#WSANO_DATA or
https://winscp.net/eng/docs/message_name_no_data
It could have been a temporary issue. Also make sure you specify your hostname without the leading ftp:// (though the latest version of WinSCP will strip it automatically).

You can find a very nice discussion of the same issue with WinSCP here.
You can also try FileZilla or PuTTY.

If you are typing your address like ftp://ftp.domain.com, remove the ftp:// prefix and just keep ftp.domain.com in your host address box.

You might want to consider PuTTY, which comes with a number of tools, including an SSH client and a secure-copy tool called pscp (similar to WinSCP). Possibly even more valuable is the psftp client, which allows SFTP transfers to remote servers. PuTTY can be run from a USB drive, making it easy to carry with you to any computer, so you can connect to your server from anywhere in the world.
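For example, a typical session with PuTTY's command-line tools might look like this; the hostname, username, and paths are placeholders:

pscp C:\data\report.txt user@ftp.example.com:/home/user/
psftp user@ftp.example.com

The first command copies a local file to the server over SSH; the second opens an interactive SFTP session where you can use get and put.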

You're probably using WinSCP to transfer files to or from the server, right? You might want to state that in your question. For that, you're probably better off with FileZilla. (You need the FileZilla client, not the FileZilla Server.)

Related

How to securely host file on RHEL server and enable download for user

I have programmed an application that users can use to process genome data. This application relies on a 10 GB database file that users have to download in order to run the application. At the moment I have stored this file on Google Drive, but the download bandwidth is limited, so if a number of users download the file on a certain day, it will not work for the others and they will get errors running the application.
My solution would be to host the file on our research server, create a user that only has access rights to this folder and nothing else, and make the file downloadable from the server via scp within the application (which is open source) through that user.
My question now is, is this safe to do or are people potentially able to hack into our server? If this method would be a security risk, what would be a better way to provide this file?
Thank you in advance!
Aloha
You can set up something like the free Seafile (https://www.seafile.com/en/home/), or ask the admin to set it up for you. It is pretty secure, like a self-hosted Google Drive with 2FA authentication.
Another nice and easy tool is Filebrowser on GitHub (https://github.com/filebrowser/filebrowser).
I would not really advise giving people shell/scp access inside your network.
And hosting anything inside a company network is in general not the wisest idea; there is always a risk involved.
I would set up a Seafile/Filebrowser solution on a cheap rented server outside your network and upload the file there. Or, if you have a small PC left over, set it up in a DMZ, a zone that has special access restrictions inside your company network.
You want to use SSH (scp) as the transport and authentication method for file hosting. It is possible to keep this safe with caution. For example, GitHub uses SSH for transport when providing git access with the git+ssh protocol.
Now for the caution part, if you haven't done it before, it's not a trivial task.
The proper way to achieve this would be to set up an isolated SSH server in a chroot environment, and set up an SSH user on this isolated SSH instance only (not a system user added by e.g. useradd). Then you can add only the files that are absolutely necessary to the chroot, and provide SSH access to users.
(Nowadays you might want to consider using Linux filesystem namespaces, if applicable, to replace chroot, but I'm not sure on this.)
As for other options, setting up a simple Nginx server for static file hosting might be a lot easier, provided you have some understanding of HTTP and TLS. There are lots of write-ups on the Internet about this.
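A minimal sketch of such an Nginx config, with placeholder domain, paths, and certificate locations:

server {
    listen 443 ssl;
    # placeholder domain and certificate paths
    server_name files.example.org;
    ssl_certificate     /etc/ssl/certs/fullchain.pem;
    ssl_certificate_key /etc/ssl/private/privkey.pem;
    location /genome-db/ {
        # serves files from /srv/static/genome-db/
        root /srv/static;
        # don't list directory contents
        autoindex off;
    }
}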
Either way, if you are going to expose your server to the Internet or an intranet, you need to take care of firewalling. Consider learning about nftables or firewalld or the like, if you haven't already.
SSH is reasonably safe. Always keep software up-to-date.
Set up an SFTP-only user with a chrooted directory. In /etc/ssh/sshd_config:
Match User MyUser
    ChrootDirectory /var/ssh/chroot
    ForceCommand internal-sftp
    AllowTcpForwarding no
    PermitTunnel no
    X11Forwarding no
This user will not get a shell (because of internal-sftp), and cannot see files outside of /var/ssh/chroot.
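One detail that often trips people up: sshd requires the chroot directory and all of its parents to be owned by root and not writable by group or others. A setup sketch, assuming the paths and user name from the snippet above (the database filename is an example):

# create the chroot and a readable subdirectory for the file
mkdir -p /var/ssh/chroot/data
chown root:root /var/ssh/chroot /var/ssh/chroot/data
chmod 755 /var/ssh/chroot /var/ssh/chroot/data
cp genome-database.db /var/ssh/chroot/data/
# create the download-only user; no shell is needed with internal-sftp
useradd --shell /usr/sbin/nologin MyUser
passwd MyUser
systemctl restart sshd    # the service may be named "ssh" on Debian/Ubuntu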
Use key-based authentication client-side, in addition to the password.
A good description of the setup process:
https://www.digitalocean.com/community/tutorials/how-to-configure-ssh-key-based-authentication-on-a-linux-server
Your solution is moderately safe.
A better solution is to put it on a server accessible via sftp, behind a password, but also encrypt the file: in this way you introduce a double layer of protection.
On a Linux server you should be able to use a tool like gpg to encrypt your file.
Next you share the decryption key with your partners over a secure channel, e.g. end-to-end encrypted messaging software.
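For example, with a symmetric passphrase (the filename is a placeholder):

# encrypt; this produces genome-database.db.gpg
gpg --symmetric --cipher-algo AES256 genome-database.db
# what your users run after receiving the passphrase
gpg --decrypt genome-database.db.gpg > genome-database.db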

Is there any JSch ChannelSftp function that works like the command 'cp'?

These days I am working with jsch-0.1.41, operating on resources on a remote Linux server via ChannelSftp. I find that there is no function that provides functionality similar to the shell command "cp". I want to copy a file from one directory to another, where both directories are remote directories on the Linux server.
If anything in my presentation is wrong, please point it out. Thanks.
The SFTP protocol doesn't offer such a command, and thus JSch's ChannelSftp doesn't offer it either.
You have basically two choices:
Use a combination of get and put, i.e. download the file and upload it again. You can do this without local storage (simply connect one stream to the other), but it still moves the data twice through the network (and encrypts/decrypts it twice) where that wouldn't really be necessary. Use this only if the other way doesn't work.
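A sketch of that stream-to-stream approach, with placeholder host, credentials, and paths (it opens two SFTP channels on one session so the download and the upload don't contend on a single channel):

import com.jcraft.jsch.ChannelSftp;
import com.jcraft.jsch.JSch;
import com.jcraft.jsch.Session;
import java.io.InputStream;

public class RemoteCopyViaStreams {
    public static void main(String[] args) throws Exception {
        JSch jsch = new JSch();
        Session session = jsch.getSession("user", "server.example.com", 22); // placeholders
        session.setPassword("secret");
        session.setConfig("StrictHostKeyChecking", "no"); // for a quick test only
        session.connect();

        // One channel reads the source file, the other writes the target
        ChannelSftp reader = (ChannelSftp) session.openChannel("sftp");
        ChannelSftp writer = (ChannelSftp) session.openChannel("sftp");
        reader.connect();
        writer.connect();

        try (InputStream in = reader.get("/remote/source/file.txt")) {
            // Data flows server -> client -> server, never touching local disk
            writer.put(in, "/remote/target/file.txt");
        }

        reader.disconnect();
        writer.disconnect();
        session.disconnect();
    }
}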
Don't use SFTP; instead use an exec channel to execute a copy command on the server. On Unix servers this command is usually named cp; on Windows servers, likely copy. (This will not work if the server's administrator has limited your account to SFTP-only access.)
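And a sketch of the exec-channel approach, reusing the session variable from the previous snippet (the paths are placeholders; add an import for com.jcraft.jsch.ChannelExec):

ChannelExec exec = (ChannelExec) session.openChannel("exec");
exec.setCommand("cp /remote/source/file.txt /remote/target/file.txt");
exec.connect();
while (!exec.isClosed()) {          // wait for the remote cp to finish
    Thread.sleep(100);
}
int status = exec.getExitStatus();  // 0 means the copy succeeded
exec.disconnect();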

Secure, Private, Local Gitorious

I want to have a local Gitorious installation that cannot be accessed outside of my local network, and is as secure and private as possible. The repos will be holding code I need kept private and secure in case of hacking or theft.
I'm not an expert with Linux, and certainly not an expert with git/gitorious, so any tips for improving my installation described below would be most helpful!
I have:
Installed Gitorious on a local machine running Ubuntu Server 11.04 64-bit, with an encrypted LVM.
Used this guide for Gitorious installation, if anyone is curious.
Modified Gitorious to support local IPs as hostnames.
In gitorious.yml:
host fields are a local IP (e.g. 192.168.xxx.xxx)
public_mode: false
only_site_admins_can_create_profiles: true
hide_http_clone_urls: true
git-daemon was installed, but is now removed.
No ports forwarded by internet facing router to machine.
Both git:// based and http:// based requests would normally allow open cloning of repos. Removing git-daemon and setting hide_http_clone_urls to true seems to have disabled both; they both return errors now when I attempt to clone.
With an encrypted LVM the machine is secure in case of physical theft. Also, all cloned repos on other machines are kept on encrypted drives as well. I used a custom script on the encrypted LVM that fills the hard drive with porn in case of too many failed attempts.
My current concerns:
Is repo access through git:// and http:// fully disabled?
Are all avenues of repo access secured behind SSH now?
Is there a way to block all requests to the machine that don't originate from within the local network, in case my router gets angry and seeks revenge against me?
Anything more I can do to encrypt or protect the repos in case something goes wrong?
How do I back up Gitorious's data? Just back up the MySQL database and the repos directory?
Thank you.
If your git-daemon is not running, then there is no git:// access.
hide_http_clone_urls does not disable HTTP; it just hides the link. To protect it from unauthorized access, you might want to block all access to git.yourdomain.com in Apache/Nginx.
You can take a look at my Debian package, which has many default configurations, better than the documentation available on the Internet:
https://gitorious.org/gitorious-for-debian/gitorious/
The base folder is where all the configuration is stored, like the Apache configs and others; there are also the shell scripts that create default users and other things. Just explore the source tree.
being more specific about the apache config, take a look here: https://gitorious.org/gitorious-for-debian/gitorious/blobs/master/base/debian/etc/apache2/sites-available/gitorious
If, for example, you don't add the git.yourserver.com alias, then no one should be able to git clone over HTTP.
You might also want to watch and support the private repositories feature that is planned, which will provide real, safe control of who can see what.
As for the question about SSH, I can say that yes, it's safe, and it will only give access to those who have a public key registered on your Gitorious installation.
About the requests question, you could take a look at Apache's Allow/Deny rules, where you can create something like:
Deny from All
Allow from 192.168.0
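For context, a hedged sketch of where those rules could live in the Apache 2.2 vhost (the server name and the Location scope are placeholders):

<VirtualHost *:80>
    ServerName git.yourdomain.com
    <Location />
        # only the local 192.168.0.x subnet may connect
        Order deny,allow
        Deny from all
        Allow from 192.168.0
    </Location>
</VirtualHost>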
For backup, you have to back up your repository folder and MySQL databases.
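A minimal backup sketch; the database name and repository path are assumptions that depend on your installation:

# dump the Gitorious MySQL database and archive the repositories
mysqldump -u root -p gitorious_production > gitorious-db.sql
tar czf gitorious-repos.tar.gz /var/git/repositories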

FTP configuration for WordPress

I've installed a WordPress instance on a Linux server, and I need to give it FTP access in order to install plugins and execute automatic backup/restores. I've just installed vsftpd, and started the service, but now what?
How do I figure out/set what the username/pass is?
Should I allow anonymous access?
Is the hostname just 'localhost'?
Any advice would be appreciated. I've never messed with FTP on Linux before. Thanks!
Your question is a little unclear because you don't specify what aspect of WordPress "wants" FTP access. If you got WP installed, you clearly have at least some access to the machine already. That said, I'll try to answer around that ambiguity.
Your questions in order, then some general thoughts:
How do I figure out/set what the username/pass is?
Remember that the man page for a program is a good first stop. A good man page will also contain a FILES or "SEE ALSO" section near the bottom that will point you to relevant config files.
In this case, "man vsftpd" mentions /etc/vsftpd.conf, so you can then do "man vsftpd.conf" to get info on how to configure it.
VSFTPD is configurable, and can allow users to log in in several ways. In the man page, check out "guest_enable" and "guest_username", "local_enable" and "user_sub_token".
The easiest route for your single-user case is probably configuring local_enable; then your username and password would be whatever they are in /etc/passwd.
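A minimal sketch of that in /etc/vsftpd.conf (treat it as a starting point and check the defaults for your distro):

# allow local system users to log in with their normal credentials
local_enable=YES
# WordPress needs to write files during plugin installs and backups
write_enable=YES
# keep anonymous access off
anonymous_enable=NO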
Should I allow anonymous access?
No. Since you're using this to admin your Wordpress, there's no reason anyone else should be using this FTP. VSFTPD has this off by default.
Is the hostname just 'localhost'?
Depends where you're coming from. 'localhost' maps back to the loopback, i.e. the same physical machine you're on. So if you need to put FTP configuration information for Server A into a WordPress configuration file on Server A, then 'localhost' is perfectly acceptable. If you're trying to configure the pasv_addr_resolve/pasv_address flag of VSFTPD, then no, you'll want to either pass in the fully qualified name of Server A (serverA.mydomain.com) or leave it off and rely on the IP address.
EDIT: I actually forgot the critical disclaimer: never send credentials over plain FTP. Plain old FTP (meaning not SFTP) sends your username and password in cleartext. I didn't install VSFTPD and play with it, but you'll want to make sure that there is some form of encryption happening when you connect. Try hitting it with WinSCP (from Windows) or sftp (from Linux) to make sure you're getting encrypted SFTP rather than plaintext FTP.
Apologies if you already knew that ;)
You would probably get better answers on server fault.
That said:
vsftpd should use your local users by default, and drop you in that user's home directory on login.
Disable anonymous access if you don't need it; I don't think WordPress will care, but your server will be safer.
Yes, or 127.0.0.1, or your public IP if you think you might split the front and back end some day.
WordPress does not natively support SFTP. You can get around this two ways:
chmod permissions in the appropriate directories to allow the normal, automatic update to work correctly (see the sketch after this list). This is the approach most certain to work, as long as it doesn't trip over any local security policies.
Try hacking it in yourself. There have been any number of threads on this at the WordPress.org forums. Here is a recent one which is also talking about non-standard ports. Here is an article about how to try to get it working on Debian Lenny (which also addresses the non-standard port issue).
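A hedged sketch of the permissions approach from the first option; the path and the www-data web-server user are assumptions that vary by distro:

# let the web server own the areas WordPress writes to during updates
chown -R www-data:www-data /var/www/wordpress/wp-content
find /var/www/wordpress/wp-content -type d -exec chmod 755 {} \;
find /var/www/wordpress/wp-content -type f -exec chmod 644 {} \;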

How to manage a DNS server remotely?

I want to make a web interface on a server that will manage a few DNS servers on other machines.
How can I remotely manage a BIND DNS server programmatically?
I would like to add/edit/delete zones.
I see that there is rndc but that only allows reloading of zones and not adding/deleting.
I could NFS-mount the zones from the DNS servers and edit them, but is there a better way?
If there isn't a hard requirement on writing something like this from scratch, why not simply use an already existing interface without having to reinvent the wheel? A simple Google search for the keywords bind dns web interface yields an entire list of good open source projects in the very first result link.
There is work at the IETF to define a standard for remote control of name servers based on the Netconf framework. See:
https://datatracker.ietf.org/doc/html/draft-ietf-dnsop-name-server-management-reqs
https://datatracker.ietf.org/doc/html/draft-dickinson-dnsop-nameserver-control-00.txt
The requirements include the ability to add/remove zones, etc.
You could set up something that runs remote SSH commands. That may be a bit insecure, though, unless the server running the commands is pre-authenticated with an SSH key, and that's the only way you can access the server.
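A rough sketch of what the web interface could run under the hood (hostnames, key path, and zone name are placeholders):

# push an updated zone file to a name server, then reload just that zone
scp -i /etc/webui/dns_key db.example.com admin@ns1.example.net:/etc/bind/zones/
ssh -i /etc/webui/dns_key admin@ns1.example.net 'rndc reload example.com'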
