Secure, Private, Local Gitorious - linux

I want to have a local Gitorious installation that cannot be accessed outside of my local network, and that is as secure and private as possible. The repos will hold code that I need to keep private and secure in case of hacking or theft.
I'm not an expert with Linux, and certainly not an expert with git/gitorious, so any tips for improving my installation described below would be most helpful!
I have:
Installed Gitorious on a local machine running Ubuntu Server 11.04 64-bit, with an encrypted LVM.
Used this guide for Gitorious installation, if anyone is curious.
Modified Gitorious to support local IPs as hostnames.
In gitorious.yml:
host fields are a local IP (e.g. 192.168.xxx.xxx)
public_mode: false
only_site_admins_can_create_profiles: true
hide_http_clone_urls: true
git-daemon was installed, but is now removed.
No ports are forwarded to the machine by the internet-facing router.
Both git://-based and http://-based requests would normally allow open cloning of repos. Removing git-daemon and setting hide_http_clone_urls to true seems to have disabled both; both now return errors when I attempt to clone.
With an encrypted LVM the machine is secure in case of physical theft. All cloned repos on other machines are kept on encrypted drives as well. I also used a custom script on the encrypted LVM that fills the hard drive with porn after too many failed attempts.
My current concerns:
Is repo access through git:// and http:// fully disabled?
Are all avenues of repo access secured behind ssh now?
Is there a way to block all requests to the machine that don't originate from within the local network, in case my router gets angry and seeks revenge against me?
Anything more I can do to encrypt or protect the repos in case something goes wrong?
How do I back up Gitorious's data? Just back up the MySQL database and the repos directory?
Thank you.

If your git-daemon is not running, then there is no git:// access.
hide_http_clone_urls does not disable HTTP; it just hides the clone link. To protect the repos from unauthorized access, you might want to block all access to git.yourdomain.com in Apache/nginx.
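As a minimal sketch of that idea (Apache 2.2 syntax; git.yourdomain.com stands in for whatever host your install serves clone URLs on):
<VirtualHost *:80>
    ServerName git.yourdomain.com
    <Location />
        # Refuse every request to this vhost
        Order deny,allow
        Deny from all
    </Location>
</VirtualHost>
On Apache 2.4 the equivalent directive is Require all denied.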
You can take a look at my Debian package, which has many default configurations, better than the documentation available on the internet:
https://gitorious.org/gitorious-for-debian/gitorious/
The base folder is where all the configuration is stored, such as the Apache configs; there are also the shell scripts that create the default users and other things. Just explore the source tree.
Being more specific about the Apache config, take a look here: https://gitorious.org/gitorious-for-debian/gitorious/blobs/master/base/debian/etc/apache2/sites-available/gitorious
If, for example, you don't add the git.yourserver.com alias, then no one should be able to git clone from http.
You might also want to watch and support the planned private-repositories feature, which will provide real, safe control over who can see what.
As for the question about SSH: yes, it's safe, and it will only give access to those who have a public key registered on your Gitorious installation.
About the requests question, you could take a look at Apache's allow/deny rules, with which you can create something like:
Order deny,allow
Deny from all
Allow from 192.168.0
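In context, that might sit in the site config like this (a sketch; the DocumentRoot path is a placeholder):
<Directory /var/www/gitorious/public>
    # Deny is evaluated first, then Allow, so only LAN clients get through
    Order deny,allow
    Deny from all
    Allow from 192.168.0
</Directory>
On Apache 2.4 you would write Require ip 192.168.0.0/24 instead. Note this only guards HTTP; SSH access is governed separately by sshd.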
For backup, you have to back up your repository folder and the MySQL databases.
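A minimal sketch of such a backup (the database name, credentials, and repository path are assumptions; adjust them to your install):
# Dump the Gitorious MySQL database (name and user are placeholders)
mysqldump -u gitorious -p gitorious_production > gitorious-db.sql
# Archive the bare repositories (path is a placeholder)
tar czf gitorious-repos.tar.gz /var/git/repositories
Restoring is the reverse: load the dump into MySQL and unpack the tarball to the same path.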

Related

How to securely host a file on a RHEL server and enable downloads for users

I have programmed an application that users can use to process genome data. This application relies on a 10GB database file, which users have to download in order to run the application. At the moment I have stored this file on Google Drive, but the download bandwidth is limited, so if a number of users download the file on a certain day, it will stop working for others and they will get errors running the application.
My solution would be to host the file on our research server, create a user that only has access rights to this folder and nothing else, and make the file downloadable from the server via scp within the application (which is open source) through that user.
My question now is: is this safe to do, or could people potentially hack into our server? If this method is a security risk, what would be a better way to provide this file?
Thank you in advance!
Aloha
You can set up something like the free Seafile (https://www.seafile.com/en/home/), or ask the admin to set it up for you; it is fairly secure, like a self-hosted Google Drive with 2FA authentication.
Another nice and easy tool is Filebrowser on GitHub (https://github.com/filebrowser/filebrowser).
I would not really advise giving people shell/scp access inside your network.
And hosting anything inside a company network is in general not the wisest idea; there is always a risk involved.
I would set up a Seafile/Filebrowser solution on a cheap rented server outside your network and upload the file there. Or, if you have a small PC left over, set it up in a DMZ, a zone with special access restrictions inside your company network.
You want to use SSH (scp) as the transport and authentication method for file hosting. It's possible to keep this safe with some caution. For example, GitHub uses SSH for transport when providing git access via the git+ssh protocol.
Now for the caution part, if you haven't done it before, it's not a trivial task.
The proper way to achieve this would be to set up an isolated SSH server in a chroot environment, and to create an SSH user on this isolated SSH instance only (not a system user added by e.g. useradd). Then you can add only the files that are absolutely necessary to the chroot, and provide SSH access to users.
(Nowadays you might want to consider using Linux filesystem namespaces, if applicable, to replace chroot, but I'm not sure about this.)
As for other options, setting up a simple Nginx server for static file hosting might be a lot easier, provided you have some understanding of HTTP and TLS. There are lots of write-ups on the Internet about this.
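A bare-bones sketch of such a static-hosting server block (hostname and paths are placeholders, and TLS is omitted for brevity):
server {
    listen 80;
    server_name files.example.org;
    location /downloads/ {
        # Serves /srv/static/downloads/<file> straight from disk
        root /srv/static;
        autoindex off;
    }
}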
Either way, if you are going to expose your server to the Internet or an intranet, you need to take care of firewalling. Consider learning about nftables or firewalld or the like, if you haven't already.
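For illustration, an nftables sketch that drops everything inbound except SSH and HTTP (the ports and the drop-by-default policy are assumptions to adapt):
# /etc/nftables.conf (sketch)
table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        ct state established,related accept
        iif "lo" accept
        tcp dport { 22, 80 } accept
    }
}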
SSH is reasonably safe. Always keep software up-to-date.
Set up an SFTP-only user with a chrooted directory. In /etc/ssh/sshd_config:
Match User MyUser
    ChrootDirectory /var/ssh/chroot
    ForceCommand internal-sftp
    AllowTcpForwarding no
    PermitTunnel no
    X11Forwarding no
This user will not get a shell (because of ForceCommand internal-sftp), and cannot see files outside of /var/ssh/chroot.
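One caveat: sshd insists that the chroot directory itself is owned by root and not writable by anyone else. A sketch of the one-time setup (the user name and paths follow the snippet above; the data subdirectory and filename are assumptions):
# The chroot target must be root-owned and not group/world-writable
sudo mkdir -p /var/ssh/chroot/data
sudo chown root:root /var/ssh/chroot
sudo chmod 755 /var/ssh/chroot
# Create the download-only account (no shell needed; sftp is forced)
sudo useradd -d / -s /usr/sbin/nologin MyUser
# Place the database file where the chrooted user can read it
sudo cp genome.db /var/ssh/chroot/data/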
Use a key or certificate on the client side, in addition to the password.
Good description of the setup process for certificates:
https://www.digitalocean.com/community/tutorials/how-to-configure-ssh-key-based-authentication-on-a-linux-server
Your solution is moderately safe.
A better solution is to put it on a server accessible via sftp, behind a password, but also to encrypt the file: this way you introduce a second layer of protection.
On a Linux server you should be able to use a tool like gpg to encrypt your file.
Then you share the decryption key with your partners over a secure channel, e.g. end-to-end encrypted messaging software.
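A sketch with gpg's symmetric mode (the filename is a placeholder; whoever downloads the file decrypts it with the passphrase you shared out-of-band):
# Encrypt with a passphrase before uploading; produces genome.db.gpg
gpg --symmetric --cipher-algo AES256 genome.db
# Users decrypt after downloading
gpg --decrypt genome.db.gpg > genome.db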

Hosting other people securely on apache2

For context: I'm a student and I must do a project with some other people in my class. My role is to prepare a web server that each of them can use and access from anywhere. I plan to host everything on a dedicated server I already have, to avoid additional cost, and to give each person a subdomain that is mapped with VirtualHosts. They will be able to send files to the server over SFTP (OpenSSH); each person gets an account, chrooted to their virtual host directory.
My main problem: will this be secure? I mean, if one of the users sets an easy password or does something risky, can someone access the other people's virtual hosts, or even the host machine itself? I have already thought about .htaccess, and it will be deactivated. Is there another way to get out of an Apache virtual host?
Things to note: they will have Apache, PHP, and access to a MySQL (or maybe MariaDB, I don't know yet) database. So they may be able to upload some old, insecure code. Some of these users are not very educated in cybersecurity.
The server is a Ubuntu 16.04 LTS.
Thanks for the advice.
If you limit their access to only their own home directory, that's a good start.
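A hedged sshd_config sketch for that (the group name and path layout are assumptions; %u expands to each user, so everyone stays inside their own vhost tree):
Match Group webdevs
    ChrootDirectory /var/www/vhosts/%u
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no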
A good additional layer of security would be to implement 2FA; check out Duo Mobile, which you can set up for SSH logins. (Or do you need more details, e.g. what options do they have to log in to the server?)
If the users are not very educated in cybersecurity as you mentioned, it will be difficult for them to escape the virtual host they have access to.
Although I need more details, such as whether each virtual host will have a separate database or they will all talk to a central database. Also, as a paranoid measure, consider where the server is hosted. There are lots of variables in what you described, but it is best to keep the server on its own network with nothing critical in the same subnet, just in case.

Restricting remote communication in Git

I have Git installed on our company's Linux server. All the developers work on the same server. Recently we had to move to Git, which is hosted on some other server. Before we create a Git repository, we create SSH keys, start ssh-agent, and finally add the private key using ssh-add.
My problem is that I created a Git repository on the Linux machine, set up my keys and everything, and pushed to the remote Git server. But if some other developer also has his key added, he can also perform a git push on my local repository.
Is there any way I can restrict push by other developers on the same Linux machine?
If you want to prevent others from pushing to your personal development machine, set up a firewall. If you want to prevent people from pushing to the remote server, remove their keys, or add per-IP firewall rules (so that they can still use SSH). At least that's what I'd do, since it looks like git itself doesn't offer any access-control facilities and leaves that to the OS/networking layer.
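As an illustration of such a per-IP rule (a sketch only; iptables, the git daemon port 9418, and the address are assumptions, not something the answer specifies):
# Block the git:// daemon port from one workstation, leaving SSH (22) open
iptables -A INPUT -p tcp --dport 9418 -s 10.0.0.42 -j DROP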
In any case, my opinion is that rather than setting up security facilities, you should trust your coworkers not to screw things up. After all, it's not some public repository; it's a company, where screw-ups (intentional or not) should be dealt with accordingly.

Is Mercurial Server a must for using Mercurial?

I am trying to pick version control software for our team, but I don't have much experience with it. After searching and googling, it seems Mercurial is worth a try. However, I am a little bit confused about some general information about it. Basically, our team has only 5 people and we all connect to a server machine which will be used to store the repositories. The server is a Red Hat Linux system. We will probably use the centralized workflow a lot. Because I like the local-commit idea, I still prefer a DVCS-style tool. Now I am trying to install Mercurial. Here are my questions.
1) Does the server used for the repositories always need to have the software "mercurial-server" installed? Or does it depend on the kind of workflow used? In other words, is it true that if no centralized workflow is used, then the server only needs the Mercurial client installed?
I am confused about the term "mercurial-server". Or does it just mean that the Mercurial installed on the server is always called a "mercurial server", regardless of whether the workflow is centralized? In addition, because we all work on that server, does that mean only one copy of Mercurial needs to be installed there? We all have our own user directories, such as /home/Cassie, /home/John, ... and /home/Joe.
2) Is SSH a must? Or does it depend on the kind of connection between the users and the server? Since we all work on the server itself, SSH is not required, right?
Thank you very much,
There are two things that can be called a "mercurial server".
One is simply a social convention that "repository X on the shared drive is our common repository". You can safely push and pull to that mounted repository and use it as a common "trunk" for your development.
A second is particular software that allows Mercurial to be accessed remotely. There are many options for setting this up yourself, as well as options for remote hosting.
Take a look at the first link for a list of the different connection options. But as a specific answer to #2: no, you don't need to use SSH, but it's often the simplest option if you're in an environment already using it.
The term that you probably want to use, rather than "mercurial server", is "remote repository". This term is used to describe the "other repository" (the one you're not executing the command from) for push/pull/clone/incoming/outgoing/others-that-i'm-forgetting commands. The remote repository can be either another repository on the same disk, or something over a network.
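For illustration, a "remote repository" can take any of these forms (paths and hostnames are placeholders):
# Another repository on the same disk or a mounted share
hg clone /mnt/share/project
# Over SSH (needs Mercurial installed on the server)
hg clone ssh://hg@server.example.com/project
# Over HTTP (served by hg serve or hgwebdir.cgi)
hg clone http://server.example.com/project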
Typically you use one shared repository to share the code between the developers. While you don't technically need it, it has the advantage that synchronization is easier when there is a single spot for the current code.
In the simplest case this can be a repository on a simple file share where file locking is possible (NFS or SMB), and where each developer has write access. In this scenario there is no need to have Mercurial installed on the server, but there are drawbacks:
Every developer must have a Mercurial version installed that can handle the repo format on the share (for example, when the repo on the share was created with Mercurial 1.9, a developer with 1.3 can't access it)
Every developer can issue destructive operations on the shared repo, including deletion of the whole repo
You can't reliably run hooks on such a repo, since the hooks execute on the developer machines, not on the server
I suggest using the http or ssh method. You need to have Mercurial installed on the server for this (I'm not taking the http-static method into account, since you can't push to an http-static path), and you get the following advantages:
the Mercurial version on the server does not need to match the clients', since Mercurial uses a version-independent wire protocol
you can't perform destructive operations via these protocols (you can only append new revisions to a remote repo, but never remove any of them)
The decision between http and ssh depends on your local network environment. http has the advantage that it gets through many corporate firewalls, but you need to take care of secure authentication when you want to push back to the server over http (or don't want everybody to see the content). On the other hand, ssh has the drawback that you might need to lock down the server so that clients can't run arbitrary programs there (depending on how trustworthy your clients are).
I second Rudi's answer that you should use http or ssh access to the main repository (we use http at work).
I want to address your question about "mercurial-server".
The basic Mercurial software does offer three server modes:
Using hg serve; this serves a single repository, and I think it's mostly used for quick hacks (when the main server is down and you need to pull some changes from a colleague, for example); see the sketch after this list.
Using hgwebdir.cgi; this is a cgi script that can be used with an HTTP server such as Apache; it can serve multiple repositories.
Using ssh (Secure Shell) access; I don't know much about it, but I believe it is more difficult to set up than the hgwebdir variant.
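As a quick illustration of the first mode (the port shown is the default; the path is a placeholder):
# Serve the current repository over HTTP; read-only unless push is enabled
cd /path/to/repo
hg serve --port 8000
# Colleagues can then: hg pull http://yourmachine:8000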
There is also a separate software package called "mercurial-server". It is provided by a different company; its homepage is http://www.lshift.net/mercurial-server.html. As far as I can tell, it is a management interface for option 3, the Mercurial SSH server.
So, no, you don't need to have mercurial-server installed; the mercurial package already provides a server.

FTP configuration for WordPress

I've installed a WordPress instance on a Linux server, and I need to give it FTP access in order to install plugins and run automatic backups/restores. I've just installed vsftpd and started the service, but now what?
How do I figure out/set what the username/pass is?
Should I allow anonymous access?
Is the hostname just 'localhost'?
Any advice would be appreciated. I've never messed with FTP on Linux before. Thanks.
Your question is a little unclear because you don't specify what aspect of WordPress "wants" FTP access. If you got WP installed, you clearly already have at least some access to the machine. That said, I'll try to answer around that ambiguity.
Your questions in order, then some general thoughts:
How do I figure out/set what the username/pass is?
Remember that the man page for a program is a good first stop. A good man page will also contain a FILES or "SEE ALSO" section near the bottom that will point you to relevant config files.
In this case, "man vsftpd" mentions /etc/vsftpd.conf, so you can then do "man vsftpd.conf" to get info on how to configure it.
vsftpd is configurable and can allow users to log in in several ways. In the man page, check out "guest_enable" and "guest_username", "local_enable" and "user_sub_token".
The easiest route for your single-user usage is probably configuring local_enable; then your username and password would be whatever they are in /etc/passwd.
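A minimal sketch of /etc/vsftpd.conf along those lines (distribution defaults vary, so verify each option against the man page):
# Allow local system users to log in with their normal credentials
local_enable=YES
# WordPress needs write access for plugins and backups
write_enable=YES
# No anonymous logins
anonymous_enable=NO
# Keep each user jailed to their home directory
chroot_local_user=YES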
Should I allow anonymous access?
No. Since you're using this to administer your WordPress, there's no reason anyone else should be using this FTP server. vsftpd has this off by default.
Is the hostname just 'localhost'?
Depends where you're coming from. 'localhost' maps back to the loopback, i.e. the same physical machine you're on. So if you need to put FTP configuration for Server A into a WordPress configuration file on Server A, then 'localhost' is perfectly acceptable. If you're trying to configure the pasv_addr_resolve/pasv_address options of vsftpd, then no: you'll want to either pass in the fully qualified name of Server A (serverA.mydomain.com), or leave it off and rely on the IP address.
EDIT: I actually forgot the critical disclaimer: never send credentials over plain FTP. Plain old FTP (meaning not SFTP) sends your username and password in cleartext. I didn't install vsftpd and play with it, but you'll want to make sure that some form of encryption happens when you connect. Try hitting it with WinSCP (from Windows) or sftp (from Linux) to make sure you're getting encrypted SFTP rather than plaintext FTP.
Apologies if you already knew that ;)
You would probably get better answers on server fault.
That said:
vsftpd should use your local users by default, and drop you in that user's home directory on login.
Disable anonymous access if you don't need it; I don't think WordPress will care, but your server will be safer.
Yes, or 127.0.0.1, or your public IP if you think you might split the front end and back end some day.
WordPress does not natively support SFTP. You can get around this in two ways:
chmod/chown the appropriate directories so that the normal, automatic update works directly (sketched below). This is the approach most certain to work, as long as it doesn't trip over any local security policies.
Try hacking SFTP support in yourself. There have been a number of threads on this in the WordPress.org forums. Here is a recent one, which also talks about non-standard ports. Here is an article about getting it working on Debian Lenny (which also addresses the non-standard port issue).
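For the first approach, a hedged sketch (the www-data user and the WordPress path are assumptions for a Debian-style setup):
# Let the web server user own the WordPress tree so self-updates work
sudo chown -R www-data:www-data /var/www/wordpress
# With the tree writable by PHP, adding the line
#   define('FS_METHOD', 'direct');
# to wp-config.php tells WordPress to skip FTP entirely.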
