Get whole domain content with wget or other commands in Linux?

I would like to copy a client project, but I only have FTP access. Normally I'd do it with SSH access, but in this case that's not possible. The problem is the size of the project (nearly 3 GB).
Is there a solution to copy the project to my server only with FTP-access?

The size isn't the problem here. Because of the encryption, an SSH upload produces much more overhead than an FTP upload, so the answer is: of course you can use FTP for file transfers, even large ones. FTP was meant for this.
The more important concern is security. If you normally use SSH for file transfers, you presumably have security in mind (otherwise you would use plain FTP, which is faster). If your provider supports SFTP, you could use that as an alternative.
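If you run the copy from your own server, a rough sketch with wget mirroring over FTP could look like this (host name, credentials, and path are placeholders):
# Mirror the whole project over FTP directly onto your server,
# so the ~3 GB never passes through your local machine
wget --mirror --no-parent --ftp-user=USERNAME --ftp-password=PASSWORD ftp://client.example.com/project/
If lftp is available, its mirror command does the same job and can resume interrupted transfers.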

Related

How to securely host file on RHEL server and enable download for user

I have programmed an application that users can use to process genome data. This application relies on a 10 GB database file that users have to download in order to run the application. At the moment, I have stored this file on Google Drive, but the download bandwidth is limited, so if a number of users download the file on a certain day, it will stop working for others and they will get errors running the application.
My solution would be to host the file on our research server, create a user that only has access rights to this folder and nothing else, and make the file downloadable from the server via scp within the application (which is open source) through that user.
My question now is, is this safe to do or are people potentially able to hack into our server? If this method would be a security risk, what would be a better way to provide this file?
Thank you in advance!
Aloha
You can set up something like the free Seafile (https://www.seafile.com/en/home/), or ask the admin to set it up for you; it's fairly secure, essentially a self-hosted Google Drive with 2FA authentication.
Another nice and easy tool is Filebrowser on GitHub (https://github.com/filebrowser/filebrowser).
I would not really advise giving people shell/scp access inside your network.
And hosting anything inside a company network is in general not the wisest idea; there is always a risk involved.
I would set up a Seafile/Filebrowser solution on a cheap rented server outside your network and upload the file there. Or, if you have a small spare PC, set it up in a DMZ, a zone with special access restrictions inside your company network.
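As a rough illustration, assuming you download the filebrowser binary from its GitHub releases page, a minimal invocation might look something like this (the flag names are taken from the project's README, so verify them against your version; the data path and port are placeholders):
# Serve the genome data directory through Filebrowser's web UI on port 8080
./filebrowser --root /srv/genome-data --address 0.0.0.0 --port 8080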
You want to use SSH (scp) as the transport and authentication method for file hosting. It's possible to keep this safe with due caution. For example, GitHub uses SSH as the transport when providing git access over the git+ssh protocol.
Now for the caution part: if you haven't done it before, it's not a trivial task.
The proper way to achieve this would be to set up an isolated SSH server in a chroot environment, and create an SSH user on this isolated SSH instance only (not a system user added by, e.g., useradd). Then add only the files that are absolutely necessary to the chroot, and provide SSH access to the users.
(Nowadays you might want to consider using Linux filesystem namespaces, if applicable, to replace chroot, but I'm not sure on this.)
As for other options, setting up a simple Nginx server for static file hosting might be a lot easier, provided you have some understanding of HTTP and TLS. There is plenty written about this on the Internet.
Either way, if you are going to expose your server to the Internet or an intranet, you need to make sure it is properly firewalled. Consider learning about nftables, firewalld, or the like, if you haven't already.
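For the Nginx option mentioned above, a minimal sketch of a static-file server block, written here via a heredoc (the server name and path are placeholders, and TLS is left out for brevity):
# Hypothetical example: drop a minimal server block into nginx's conf.d
sudo tee /etc/nginx/conf.d/genome-data.conf > /dev/null <<'EOF'
server {
    listen 80;
    server_name data.example.org;   # placeholder
    location / {
        root /srv/genome-data;      # placeholder path to the database file
    }
}
EOF
sudo nginx -t && sudo systemctl reload nginx   # test the config, then reload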
SSH is reasonably safe. Always keep software up-to-date.
Set up an sftp-only user with chrooted directory. In /etc/ssh/sshd_config:
Match User MyUser
    ChrootDirectory /var/ssh/chroot
    ForceCommand internal-sftp
    AllowTcpForwarding no
    PermitTunnel no
    X11Forwarding no
This user will not get a shell (because of internal-sftp), and cannot see files outside of /var/ssh/chroot.
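For ChrootDirectory to work, sshd requires the chroot path itself to be owned by root and not writable by anyone else, so give the user a writable subdirectory inside it. A minimal sketch of the setup (user name and paths match the example above; the nologin path and service name vary by distro):
sudo useradd --shell /usr/sbin/nologin MyUser      # no interactive shell
sudo mkdir -p /var/ssh/chroot/upload
sudo chown root:root /var/ssh/chroot               # the chroot itself must be root-owned
sudo chmod 755 /var/ssh/chroot
sudo chown MyUser:MyUser /var/ssh/chroot/upload    # writable area for the sftp user
sudo systemctl restart sshd                        # apply the sshd_config change (may be "ssh" on Debian)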
Use a certificate (SSH key) on the client side, in addition to the password.
Good description of the setup process for certificates:
https://www.digitalocean.com/community/tutorials/how-to-configure-ssh-key-based-authentication-on-a-linux-server
Your solution is moderately safe.
A better solution is to put it on a server accessible via sftp, behind a password, but also encrypt the file: in this way you introduce a double layer of protection.
On a Linux server you should be able to use a tool like gpg to encrypt your file.
Next, you share the decryption key with your partners over a secure channel, e.g. end-to-end encrypted messaging software.
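A minimal sketch with gpg symmetric encryption (the file name is a placeholder; the passphrase is the key you then share over the secure channel):
# Encrypt before uploading; gpg prompts for a passphrase and writes genome-db.tar.gz.gpg
gpg --symmetric --cipher-algo AES256 genome-db.tar.gz
# On the user's side, after downloading:
gpg --output genome-db.tar.gz --decrypt genome-db.tar.gz.gpg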

Remote directory compare and merge without ssh

I have two remote servers/machines, say s1 and s2 (Linux-based machines).
Both servers have one directory which is very large (initially the same data on both machines).
s1 is always stable and up to date; changes are added by an authorized user.
On s2, people make changes to the data here and there.
The requirement now is to bring the content of s2 in sync with s1.
Conditions:
1. No wholesale replacement of s2's content with s1's, because the data is very large.
2. No other software is allowed to be installed on the machines.
3. Only scp and sftp are supported; no ssh shell or any other sort of access is given, because these are production machines.
If anybody has come across this sort of requirement, please suggest a tool or a way to do this task.
If, you say, you have scp, then you must also have ssh. scp requires ssh to work. So, I'll start by challenging your assumption that you can't use rsync over ssh. If you have scp working, then there's no reason why rsync over ssh should not work.
rsync over ssh is the correct answer here. It is the most efficient mechanism for synchronizing content between two different servers. But I suppose it's possible that someone who thinks he knows what he's doing, but really doesn't, hacked up a server to allow only the scp service and block ssh sessions, probably under the mistaken notion that this somehow improves security. It really doesn't, but that's a different topic. So, what now...
Well, you say you do have sftp access available. In that case, the next best answer would be a custom sftp client. Learn Perl, and use the Net::SFTP module to write a custom Perl script, for your specific requirement, that uses SFTP to compare the contents of the two servers and synchronize them.
Net::SFTP exposes the underlying SFTP protocol in a way that allows one to write custom applications that use it. You'll use the SFTP protocol to examine the contents of each server, figure out what's different, then copy what needs to be copied in order to update their contents.
Using Net::SFTP won't be as efficient as using rsync+ssh. With Net::SFTP, you'll know which files exist on the server, and the size of each file in bytes. However, if both servers appear to have a file with the same name and the same byte count, you don't really know whether they are, in fact, identical, without downloading each file and comparing them manually. You'll have to do that, of course. This is the key advantage of rsync+ssh for which there is no sftp equivalent: the rsync server works together with the rsync client, and they're able to verify that the file contents are identical, using checksums, without actually transferring the file from one side to the other. There's no way to avoid doing that with sftp in this case, but this is going to be the best you'll be able to do.
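If ssh does turn out to be usable after all, the rsync approach recommended above boils down to something like this, run on s2 and pulling from s1 (user name and paths are placeholders):
# Preview what would change without modifying anything
rsync -avz --dry-run user@s1:/path/to/huge-dir/ /path/to/huge-dir/
# Then do the real sync; only the changed parts of changed files are transferred
# (add --delete if files removed on s1 should also disappear from s2)
rsync -avz user@s1:/path/to/huge-dir/ /path/to/huge-dir/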
If you decide to go the Perl route, don't use Net::SFTP, which is an old and unmaintained module. Instead go for Net::SFTP::Foreign which, BTW, implements recursive downloads and allows you to select which files to get on the fly, so you can easily do an update.
Another alternative is to use the development version of my other module, Net::SSH::Any, which has a built-in scp client able to download only the files that are newer on the remote side:
use Net::SSH::Any;

my $ssh = Net::SSH::Any->new(...);    # connection parameters elided in the original
$ssh->scp_get( { recursive => 1,      # descend into subdirectories
                 update    => 1 },    # fetch only files that are newer on the remote side
               $remote_dir, $local_dir );
Other scripting languages like Python or Ruby also have SFTP and SCP libraries.

How to move a website without downloading and then uploading

Please forgive me if this has been asked but I could not find an answer.
How can one move a website from one server to another without having to download the files (zipped or otherwise) and then uploading to the new server?
I suspect this is possible via ftp, but am not sure how to do so.
Thank you
The standard FTP protocol does include this feature (direct server-to-server transfer, also known as FXP), but in my experience it is hard to get working and not many servers support it.
A simpler option is to use SSH. If you have SSH access to either of these servers, you can log in to a shell on one server and transfer the files from/to the other server via FTP.
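For example, assuming the new server has lftp installed and the old server exposes the site over FTP (host names, credentials, and paths are placeholders), a rough sketch:
# Log in to the new server...
ssh user@new-server.example.com
# ...and mirror the site straight from the old server over FTP,
# so nothing passes through your local machine
lftp -u USERNAME,PASSWORD -e "mirror -c /public_html ./public_html; quit" old-server.example.com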

Migrate data from one server to another

I bought a new server and I want to move all the data (directories, sub directories, users, passwords, ..etc) from my old server to it.
Is there a way to do that?
Thanks,
Do you have physical access to both servers? If so you can use the dd command to make a clone of the disk from the old server to the disk that is going into the new server.
In order to do this though, both hard drives have to be installed in one of the servers.
You can also use netcat and dd to clone a disk over a network.
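A hedged sketch of the netcat+dd approach (device names, host, and port are placeholders; this overwrites the target disk, so triple-check the device names, and note that nc option syntax differs between netcat variants):
# On the new server: listen and write the incoming stream to the target disk
nc -l 9000 | dd of=/dev/sdX bs=64K
# On the old server: read the source disk and stream it over the network
dd if=/dev/sdX bs=64K | nc new-server.example.com 9000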
For the directories and files, use an FTP client from your server if it allows you to; if not, just download all the content to your computer and upload it to the new server.
For the users and passwords, I guess they are in a database: connect to the database using SSH, telnet, MysqlAdmin or any other RDBMS client, export a dump file, then log in to the new server's SQL system and import that dump file.
In any case, you should give more details about both servers so we can help you. For example, are they shared hosting or dedicated machines? What kind of access do you have to them? Knowing their operating system would also help people answer you accurately.
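If the users and passwords are in MySQL, for example, a minimal dump-and-restore sketch (credentials are placeholders):
# On the old server: dump every database into a single file
mysqldump -u root -p --all-databases > all-databases.sql
# Copy the dump across, then on the new server:
mysql -u root -p < all-databases.sql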
In principle, yes.
If the hardware is similar (= just more RAM and disk space, but the same CPU architecture and no special graphics card drivers), you might be able to copy every file and then install the boot loader once more (the boot loader config usually changes when the hard disk size changes), roughly as sketched below.
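A hedged sketch of that approach (device and host names are placeholders; the exclude list keeps pseudo-filesystems out of the copy, and the GRUB commands differ between distros):
# Copy the whole filesystem, preserving permissions, ACLs, xattrs and hard links
rsync -aAXHv --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"} / root@new-server.example.com:/
# Then, on the new server, reinstall the boot loader (GRUB shown; update-grub is the Debian wrapper)
grub-install /dev/sdX && update-grub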
Or you can create a list of all services that you use, determine which config files each one uses and then just copy those. Ideally, you shouldn't copy them but compare the old and the new versions and merge them.
The most work-intensive way is to use a tool like Puppet. In a nutshell, Puppet allows you to create install scripts for services (along with all the configuration that you need). So if you need to install a service again (new hardware, second server), you just tell Puppet to do it. On the plus side, your whole installation will be documented, too. If you ever wonder why something is the way it is, you can look into the Puppet files.
Of course, this approach takes a lot of time and discipline, so it might not be worth it in your case. Apply common sense.

Linux: Uploading files to a live server - How to automate process?

I'm developing on my local machine (apache2, php, mysql). When I want to upload files to my live server (nginx, mysql, php5-fpm), I first back up my www folder, extract the databases, scp everything to my server (which is tedious, because it's protected with opiekey), log myself in, copy the files from my home directory on the server to my www directory, and if I'm lucky and the file permissions and everything else work out, I can view the changes online. If I'm unlucky I'll have to research what went wrong.
Today, I changed only one file, and had to go through the entire process just for this file. You can imagine how annoying that is. Is there a faster way to do this? A way to automate it all? Maybe something like "commit" in SVN and off you fly?
How do you guys handle these types of things?
PS: I'm very very new to all this, so bear with me! For example I'm always copying files into my home directory on the server, because scp cannot seem to copy them directly into the /var/www folder?!
There are many utilities which will do that for you. If you know python, try fabric. If you know ruby, you may prefer capistrano. They allow you to script both local and remote operations.
If you have a farm of servers to take care of, those two might not work at the scale you want. For over 10 servers, have a look at chef or puppet to manage your servers completely.
Whether you deploy from a local checkout, packaged source (my preferred solution), a remote repository, or something entirely different is up to you. Whatever works for you is OK. Just make sure your deployments are reproducible (that is, you can always say "5 minutes ago it wasn't broken, I want to have now what I had 5 minutes ago"). Whatever way of versioning you use is better than no versioning (tagged releases are probably the most comfortable).
I think the "SVN" approach is very close to what you really want. You make a cron job that will run "svn update" every few minutes (or hg pull -u if using mercurial, similar with git). Another option is to use dropbox (we use it for our web servers sometimes) - this one is very easy to setyp and share with non-developers (like UI designers)...
rsync will send only the changes between your local machine and the remote machine. It would be an alternative to scp. You can look into how to set it up to do what you need.
You can't copy to /var/www because the credentials you're using to log in for the copy session don't have write access to /var/www. Assuming you have root access, change the group ownership (chown) of /var/www (or better yet, a subdirectory) to your group and change the permissions to allow your group write access (chmod g+w).
rsync is fairly lightweight, so it should be simple to get going.
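Putting those two suggestions together, a rough sketch (user, group, host, and paths are placeholders):
# One-time setup on the server: give your group write access to the web root
sudo chown -R www-data:developers /var/www/mysite
sudo chmod -R g+w /var/www/mysite
# Then each deploy from your local machine is a single command;
# only changed files are transferred
rsync -avz --exclude='.git' ./www/ user@live-server.example.com:/var/www/mysite/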
