I'd like to set up a cheap Linux box as a web server to host a variety of web technologies (PHP & Java EE come to mind, but I'd like to experiment with Ruby or Python in the future as well).
I'm fairly versed in setting up Tomcat to run on Linux for serving up Java EE applications, but I'd like to be able to open this server up, even just so I can create some tools I can use while I am working in the office. All the experience I've had with configuring Java EE sites has all been for intranet applications where we were told not to focus on securing the pages for external users.
What is your advice on setting up a personal Linux web server in a secure enough way to open it up for external traffic?
This article has some of the best ways to lock things down:
http://www.petefreitag.com/item/505.cfm
Some highlights (a quick sketch of the first two follows below):
Make sure no one can browse the directories
Make sure only root has write privileges to everything, and only root has read privileges to certain config files
Run mod_security
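A quick sketch of those first two highlights, assuming Apache 2 on Debian/Ubuntu with a docroot of /var/www (the paths here are illustrative):

# In the Apache config: no directory listings
<Directory /var/www>
    Options -Indexes
</Directory>

# On the filesystem: root owns everything, nobody else can write
chown -R root:root /var/www
chmod -R go-w /var/www
chmod 600 /etc/apache2/apache2.conf   # sensitive config readable by root only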
The article also takes some pointers from this book:
Apache Security (O'Reilly Press)
As far as distros, I've run Debian and Ubuntu, but it just depends on how much you want to do. I ran Debian with no X and just ssh'd into it whenever I needed anything. That is a simple way to keep overhead down. Or Ubuntu has some nice GUI tools that make it easy to control Apache/MySQL/PHP.
It's important to follow security best practices wherever possible, but you don't want to make things unduly difficult for yourself or lose sleep worrying about keeping up with the latest exploits. In my experience, there are two key things that can help keep your personal server secure enough to throw up on the internet while retaining your sanity:
1) Security through obscurity
Needless to say, relying on this in the 'real world' is a bad idea and not to be entertained. But that's because in the real world, baddies know what's there and that there's loot to be had.
On a personal server, the majority of 'attacks' you'll suffer will simply be automated sweeps from machines that have already been compromised, looking for default installations of products known to be vulnerable. If your server doesn't offer up anything enticing on the default ports or in the default locations, the automated attacker will move on. Therefore, if you're going to run an SSH server, put it on a non-standard port (>1024) and it's likely it will never be found. If you can get away with this technique for your web server then great: shift that to an obscure port too.
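For example, in /etc/ssh/sshd_config (2222 here is just an arbitrary high port picked for illustration; restart sshd after changing it):

Port 2222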
2) Package management
Don't compile and install Apache or sshd from source yourself unless you absolutely have to. If you do, you're taking on the responsibility of keeping up-to-date with the latest security patches. Let the nice package maintainers from Linux distros such as Debian or Ubuntu do the work for you. Install from the distro's precompiled packages, and staying current becomes a matter of issuing the occasional apt-get update && apt-get -u dist-upgrade command, or using whatever fancy GUI tool Ubuntu provides.
One thing you should be sure to consider is which ports are open to the world. I personally just open port 22 for SSH and port 123 for ntpd. But if you open port 80 (http) or FTP, make sure you at least know what you are serving to the world and who can do what with it. I don't know a lot about FTP, but there are millions of great Apache tutorials just a Google search away.
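A minimal iptables sketch of that stance, assuming you really do want only SSH and NTP reachable (add the ACCEPT rules before flipping the policy, or you can cut off your own session):

iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT    # SSH
iptables -A INPUT -p udp --dport 123 -j ACCEPT   # ntpd
iptables -P INPUT DROP                           # refuse everything else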
Bit-Tech.Net ran a couple of articles on how to set up a home server using Linux. Here are the links:
Article 1
Article 2
Hope those are of some help.
@svrist mentioned EC2. EC2 provides an API for opening and closing ports remotely. This way, you can keep your box running. If you need to give a demo from a coffee shop or a client's office, you can grab your IP and add it to the ACL.
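With the modern AWS CLI, that workflow looks something like this (the security group ID and address are placeholders):

# Allow SSH from the coffee shop's IP, then revoke it when you leave
aws ec2 authorize-security-group-ingress --group-id sg-12345678 --protocol tcp --port 22 --cidr 203.0.113.5/32
aws ec2 revoke-security-group-ingress --group-id sg-12345678 --protocol tcp --port 22 --cidr 203.0.113.5/32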
It's safe and secure if you keep your voice down about it (i.e., rarely will someone come after your home server if you're just hosting a glorified webroot on a home connection) and your wits up about your configuration (i.e., avoid using root for everything, and make sure you keep your software up to date).
On that note, although this thread may dwindle down to just flaming, my suggestion for your personal server is to stick with Ubuntu (get Ubuntu Server here); in my experience, it's the quickest distro to get answers for when asking questions on forums (not sure what to say about uptake though).
My home server security BTW kinda benefits (I think, or I like to think) from not having a static IP (runs on DynDNS).
Good luck!
/mp
Be careful about opening the SSH port to the wild. If you do, make sure to disable root logins (you can always su or sudo once you get in) and consider more aggressive authentication methods within reason. I saw a huge dictionary attack in my server logs one weekend going after my SSH server from a DynDNS home IP server.
That being said, it's really awesome to be able to get to your home shell from work or away... and adding on the fact that you can use SFTP over the same port, I couldn't imagine life without it. =)
You could consider an EC2 instance from Amazon. That way you can easily test out "stuff" without messing with production. And you only pay for the space, time, and bandwidth you use.
If you do run a Linux server from home, install OSSEC on it for a nice lightweight IDS that works really well.
[EDIT]
As a side note, make sure that you do not run afoul of your ISP's Acceptable Use Policy and that they allow incoming connections on standard ports. The ISP I used to work for had it written in their terms that you could be disconnected for running servers over port 80/25 unless you were on a business-class account. While we didn't actively block those ports (we didn't care unless it was causing a problem) some ISPs don't allow any traffic over port 80 or 25 so you will have to use alternate ports.
If you're going to do this, spend a bit of money and at the least buy a dedicated router/firewall with a separate DMZ port. You'll want to firewall off your internal network from your server so that when (not if!) your web server is compromised, your internal network isn't immediately vulnerable as well.
There are plenty of ways to do this that will work just fine. I would usually just use a .htaccess file. Quick to set up and secure enough. Probably not the best option, but it works for me. I wouldn't put my credit card numbers behind it, but other than that I don't really care.
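For reference, the minimal version of that setup, assuming Apache with AllowOverride enabled for the directory (the file locations are my own choices):

# Run once to create the password file
htpasswd -c /etc/apache2/.htpasswd someuser

# .htaccess in the directory to protect
AuthType Basic
AuthName "Private"
AuthUserFile /etc/apache2/.htpasswd
Require valid-user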
Wow, you're opening up a can of worms as soon as you start opening anything up to external traffic. Keep in mind that what you consider an experimental server, almost like a sacrificial lamb, is also easy pickings for people looking to do bad things with your network and resources.
Your whole approach to an externally-available server should be very conservative and thorough. It starts with simple things like firewall policies, includes the underlying OS (keeping it patched, configuring it for security, etc.) and involves every layer of every stack you'll be using. There isn't a simple answer or recipe, I'm afraid.
If you want to experiment, you'll do much better to keep the server private and use a VPN if you need to work on it remotely.
I am running a Java minecraft server on my Linux server. I have been asked to install some mods (e.g. data packs) on the server which appear to contain Java code written by a third party other than Mojang.
Is this Java code restricted in what it can do, or can it run any arbitrary code it likes (e.g. read /etc/passwd, open TCP ports, claim huge amounts of memory, etc.)?
In other words, how risky are minecraft mods containing Java code?
This is a very good question. I was actually wondering that too. I have spent a bit of time looking at the binaries of Minecraft and the Minecraft server Spigot. I assume we both use the Java Edition.
First of all, Java code, once you run it on a host, can do anything. The only mechanism in Java that can prevent that and protect the user from the developer is the security manager. Once you turn on the security manager (which is an opt-in mechanism) you have the ability to define a set of rules that Java will obey, e.g. it will not write into directories you don't allow it to.
So the question is: does Minecraft use the security manager by default? I am 99% sure it does not. No one uses the security manager, because it is a pain to configure correctly and things stop working every time you get it wrong (you know an application uses the security manager when you run into policy misconfiguration problems every now and then).
Minecraft is launched by running an exe; I would not know where to turn the security manager on even if I wanted to. There is a bit of hope with the Spigot server: you can enable the security manager with -Djava.security.manager and supply your policy with -Djava.security.policy==my.policy. But getting the policy right will be a pain. I will try to look into it, though, when I have a free week or so.
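For the record, such a launch would look something like the lines below; the policy file is only an illustrative sketch (the paths, port, and jar name are placeholders) and would need fleshing out before the server actually works:

java -Djava.security.manager -Djava.security.policy==my.policy -jar spigot.jar

// my.policy -- grant only what the server needs, for example:
grant {
    permission java.io.FilePermission "/srv/minecraft/-", "read,write";
    permission java.net.SocketPermission "*:25565", "accept,listen,resolve";
};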
Minecraft mods can act with the full permissions of the user running Minecraft. If you don't trust the author of a mod, then there are basically two approaches to safely using it (a sketch of the second follows after this list):
Audit the source code yourself, and then compile the mod yourself so you know it matches the binary
Run Minecraft in a sandbox such as a VM, or as a heavily restricted user.
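The second option can be as simple as a dedicated unprivileged account (the account name and paths here are my own choices):

# One-time setup: a system account that owns only the server directory
useradd -r -m -d /srv/minecraft minecraft
# Run the server as that account
sudo -u minecraft java -Xmx2G -jar /srv/minecraft/server.jar nogui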
I have a Windows box (clean, no bloat, no Node, no servers) that I develop on, and I incidentally have Cygwin on it. I also have an Arch Linux server fully configured like a dream, the way I like it, and I even use PuTTY on the Windows box to reach it. I would love to use the resources on the Linux box for this; the problem is that I spend too much time on nginx, php-fpm, and the like on the server to keep a proper DNS name dialed in, so the browsers on the dev machine could reach the server by name whenever I need it.
I'm willing to break the pattern and stab at a quick solution, since this comes up so often for me, but I want the easy option, so I thought I'd ask for opinions.
- What I need is a way to access the Node server (any Node server, for that matter) from the browsers on the Windows box. That's my main requirement.
- Secondarily, I need to access Git on the server for repo storage, and preferably even work on the files out of there as \\hostname\projects\site\index.js etc. on the Windows box.
- I prefer NOT to use Git through any kind of Start menu or icon; I would hate that. I'm a command-line guy.
Existing
Windows development box; want to work on a Node app; Arch box on a 192.168 subnet with working Node; no DNS mapped (I can add it to /etc/hosts, but having the Linux box capture that DNS name is too much work for now).
Option 1
Use Cygwin right here: install Node on it and go to town on development. But I still want to use the Git repo/Git on the Arch Linux box somehow. I won't install Git or Node.js on Windows per se, only through the command line: Chocolatey maybe, but preferably Cygwin, if there is such a thing; I just haven't really used it before.
Option 2
What's available for me to map something easy to the Linux box and use the resources available there plus PuTTY? E.g., do I need a quick DNS solution, or what am I looking for? (Don't suggest BIND or dnsmasq, please. I much prefer BIND and have it on there, but I don't want to get it dialed in; I just want to spend an hour on this each time I need to work on a website. I need something quick.)
What about a proxy? What if I point my browsers to proxy through the IP of the server? I don't really mind using IPs, as long as the sites allow it.
Suggestions?
There is nothing wrong with dnsmasq. It's way simpler than BIND; you just put names in /etc/hosts. For the Windows machine, install VirtualBox and Ubuntu. I'm not sure Cygwin works at all with Node, but it would probably suck compared to VirtualBox.
There is no simple Linux DNS server that I know of besides dnsmasq. nsd is not bad, but it's still a pain in the ass. There might be an easy-to-set-up Windows DNS server, though. But I would just use VirtualBox and dnsmasq.
On Windows the hosts file is normally in \WINDOWS\system32\drivers\etc
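Either way, the entry itself is a single line, something like this (IP and name made up):

192.168.1.50    archdev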
I've built an application that runs with Node.js and permits retrieving some data through a REST API.
I want to put it online on a personal computer (Windows), but I have no idea how to install a server and what I need to make my application available online.
Can someone explain the steps to me? I know that some online services exist, like Heroku, but I want to do it by myself.
Thank you
This question looks small, but it's actually huge. I started writing this as a basic guide, and it ended up really being quite a lengthy answer, so I split it into pieces. Overall hope this helps!
Using a VPS
You don't want to serve a website from your personal computer, because any time your computer is off, the website will be down. You don't want that kind of responsibility for your computer, so much of the time people choose to essentially rent server space from companies whose sole purpose is to get you space/bandwidth on a simple computer that is always on. These are often called VPSes (virtual private servers).
So the first step I'd recommend is to grab a VPS for yourself. Digital Ocean is a great service where you can get a solid server for $5/month; I would recommend starting there. There are bunches of other companies you can get a VPS from if you prefer, though, probably the most popular alternative being Linode.
Once you've got yourself a VPS, log in to it using ssh. Usually it will look something like this:
ssh root@000.000.0000
...with the number at the end being the IP address of your server. Most VPSes are some flavor of Linux, so being familiar with the Linux command line is important. Once you're all set in your server, you'll want to do a few things. This is what I usually do, in order:
Install vim
For me, vim is the easiest way to edit files through the command line. This certainly might not be the case for everyone - some people prefer emacs, and some nano, which is a lot simpler. If you are interested in learning about vim, there are loads of tutorials around the 'net. If getting into vim isn't your thing, I'd recommend using nano instead wherever I mention it from here on.
To get it installed, we can use apt, the package manager on Ubuntu, the flavor of Linux I'll use in this answer since it's a popular one for servers and is the default on Digital Ocean. Just run apt-get update to make sure the package lists are up to date, then apt-get install vim to put in vim.
Add your ssh key
Add your ssh key to ~/.ssh/authorized_keys so that you don't need a password to log in. If you are unfamiliar with ssh keys, they are basically a pair of cryptographic keys that let you avoid authorizing with a password every time. By adding your public key to the ~/.ssh/authorized_keys file, you are essentially telling the server "this is my computer, so you don't need to ask me for a password to log in". GitHub has a great guide on how to generate keys. Once this is done, you can open the file with vim, get into insert mode, and paste the public key in from your local machine. Save and quit and you're set.
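If you'd rather skip the vim step, ssh-copy-id (shipped with OpenSSH on most systems) does the appending for you; from your local machine:

ssh-keygen -t rsa -b 4096        # generate the key pair (skip if you already have one)
ssh-copy-id root@YOUR_SERVER_IP  # appends your public key to ~/.ssh/authorized_keys on the server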
Install node.js
If you are trying to run a node app, you will of course need to have node! Installing node on Linux is a bit different, because the node installer I'm sure you used locally is graphical, whereas here you only have the command line. Luckily, it's not much more difficult with this set of instructions, which you can follow exactly. Make sure you do not just do the default apt-get install nodejs, as this will install an old version. Take the couple of steps after the second paragraph to add the PPA and get a newer version.
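At the time of writing, those instructions boil down to a NodeSource setup script followed by apt (the exact script URL changes with the Node release you're after):

curl -sL https://deb.nodesource.com/setup_lts.x | bash -
apt-get install -y nodejs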
Deploy your app
Ok, so you have a machine that has node and theoretically could run your app. This is good news. Now we need to actually get the app onto the machine. There are a few ways you can do this. If you have ruby installed locally, you can use capistrano, a popular deployment solution. A lighter-weight approach that I often prefer is deploy, although I don't think that will work on Windows. You can also just use github or bitbucket - push your app to a remote repo, then clone it down from your VPS (make sure to apt-get install git and set up your username first - if it's a private repo, you'll probably have to generate and add a key to get access to pull it down). However you manage to do it, get the files transferred.
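The github/bitbucket route, for instance, is just this (the repo URL and path are placeholders):

apt-get install git
git clone https://github.com/yourname/yourapp.git /var/www/yourapp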
Test your app
On your VPS, cd into wherever your app was put and run it. Make sure everything is working ok, and hit http://YOUR_IP:PORT - just your IP address followed by the port number your app is running on after the colon. You should be able to see your app. If not, check back in the terminal; it may have crashed. Sometimes you find flukes when you are setting it up on a different system. If your app uses a database, you might need to get that configured too. You can google "ubuntu setup database name" and find some tutorials -- Digital Ocean has a pretty solid library of these types of tutorials themselves.
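Concretely, the test run might look like this (the port and path are whatever your app actually uses):

cd /var/www/yourapp
npm install                   # install dependencies on the server
node app.js                   # leave this running and watch for errors
curl http://localhost:1234    # from a second ssh session, or hit http://YOUR_IP:1234 in a browser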
Install nginx
Nginx is a great way to serve multiple apps on one machine, and to handle domain names and such. I wrote an article on how to set up nginx that you can check out to learn the basics and get it installed. Once this is done, you can link up your app with a proxy_pass. Rather than try_files, which is what the article uses to serve static files, just drop in a proxy_pass statement pointing to the port your app is running on, and nginx will direct traffic right through to your app. Here's an example, if you had your app running on port 1234 and your domain name was example.com:
server {
    server_name example.com;
    location / {
        proxy_pass http://localhost:1234;
    }
}
This will just take traffic coming into the box from example.com and pass it to your app, which is awesome.
Get your domain in order
I have to assume you don't want to require people to use an IP address to access your app, and that you want a domain name. Go grab one from wherever, and once you have it you need to edit the DNS records. I've found that it's easiest to use dnsimple for this, as not every domain registrar has solid DNS record handling, and you can keep all your DNS management in one place. Now, just put an A record on the root of your domain, pointing it to your VPS's IP address. After giving it a couple of minutes for the records to propagate, a hit to that domain should go directly to your server - fantastic.
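You can check propagation from your local machine with dig; once the A record is live, it should print your VPS's IP:

dig +short example.com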
Now is the time to check through and make sure that your app is running properly and that your nginx configuration is correct (and that you have reloaded nginx). Make sure that in your configuration, the server_name mirrors the domain you set to point at your VPS. Make sure the port in the proxy_pass is the same as your app is running on. Once this has been confirmed, go to the domain, and if you did it right, your app will come up. Whoo!
Run it on a production server
Great, so we got our app running and it's online on the internet for the public to enjoy. Just about time to sit back and let everyone throw money at you, a common occurrence whenever you get a site shipped. But don't recline too quickly, because the last thing we need is for the app to go down when something goes wrong or when you log out of your VPS - you don't want to have to keep a terminal window open running the app forever. For this, we can use what some call production servers -- servers made specifically to ensure that your app runs in the background and stays running all the time. Luckily, node has a few of these open source, my favorite being pm2. Check out this page, read the getting started instructions, install pm2 on your machine, and run your app. The process might look something like this:
npm install pm2 -g
cd path_to_my_app
pm2 start app.js
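Two more pm2 commands are worth knowing so the app also survives a reboot of the VPS (pm2 startup prints a command tailored to your init system; run what it tells you to):

pm2 startup   # hook pm2 into the boot process
pm2 save      # remember the currently running apps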
Since you ran it on the same port, your nginx configuration should remain the same, and your app should still be up if you visit the domain.
Phew, that was a lengthy process. Probably more than you expected - it makes sense why something like Heroku exists. So is this really worth it, running and maintaining the site yourself? I'd argue yes, and I host every one of the sites and apps I run like this. Here's why:
learning: I learn tons about how things work this way, and get much better at sysops.
cost: You can host like 20 sites on a single $5 Digital Ocean box. Hosting is pennies.
control: Heroku sometimes goes down and it sucks because all you can do is wait for them to get it back up. If my site goes down, it's my fault and I can find out why and fix it.
I'm sure this answer was more than you ever expected to get here, but I hope this helps! Getting from dev to sysops is a journey and can sometimes get really frustrating, but I promise that once you have a good handle on things, it feels great and really helps your skills a lot.
Finally, I want to note that this is without a doubt an opinionated guide. There are tons of other tools and other ways to do these things -- the workflow I have here is just the way I prefer to do things. By all means feel free to tinker and suit the workflow to your needs once you have it under your belt! There are also lots of other details that could be added in here about setting up different databases, improving your deploy/restart flow, and securing your box a little more thoroughly. I would love to hear any feedback and add any of these pieces in if you or others are interested.
Google Cloud Platform has resources for Node developers. There is a tutorial that shows you how to deploy a simple Node.js application to Google App Engine Managed VMs. Details of the pricing are here.
Amazon Web Services (AWS) also has a similar service. Here is the tutorial. The AWS Free Tier is designed to give you hands-on experience with AWS at no charge for 12 months after you sign up. You can investigate AWS as a platform for your Node.js application. Check it here.
We received access to the environment, but I now need to go through the process of securing it so that the previous vendor can no longer access it, or the Web applications running on it. This is a Linux box running Ubuntu. I know I need to change the following passwords:
SSH
FTP
MySQL
Control Panel Admin
Primary Application Admin
However, how do I really know I've completely secured the system using best practices, and am I missing anything else that I need to do other than just changing passwords?
3 simple steps
Backup configurations / source files from HTTP / SQL tables
Reinstall operating system
Follow standard hardening steps on fresh OS
Regardless of who it was, they could have installed any old crap on there (rootkits) that you can't configure away.
You will probably get more responses at serverfault.com on these kinds of questions.
There are several things you can do to secure SSH by editing your sshd_config file, which is usually in /etc/ssh/ (remember to restart sshd afterwards; see the note after this list):
Disable Root Logins
PermitRootLogin no
Change the ssh port from Port 22
Port 9222
Manually specify which accounts can log in (note the list is space-separated)
AllowUsers andrew jane doe
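None of these changes take effect until sshd reloads its config; on Ubuntu that is typically:

service ssh restart   # keep your current session open until you've confirmed you can still log in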
SecurityFocus has a good article about securing MySQL, although it's a bit dated.
The best thing you could do would be to reinstall, and make sure that when you bring files over from the old system to the new one, it is just data, not executables that could be nasty. If that is too much, change all the passwords, watch the logs for a few weeks, and play with iptables to block the former vendor. Also, given that it could have a rootkit at the kernel level, it's probably a good idea to swap the kernel out as well, and to watch traffic coming out of the box for anything that might be going to the vendor. It really is a hassle to take someone else's machine and declare it safe; I would go as far as to say it is nearly impossible.
Side note: this isn't really programming-related, so it probably shouldn't be on this site.
I have Apache running on a public-facing Debian server, and am a bit worried about the security of the installation. This is a machine that hosts several free-time hobby projects, so none of us who use the machine really have the time to constantly watch for upstream patches, stay aware of security issues, etc. But I would like to keep the bad guys out, or if they get in, keep them in a sandbox.
So what's the best, easy-to-set-up, easy-to-maintain solution here? Is it easy to set up a user-mode Linux sandbox on Debian? Or maybe a chroot jail? I'd like to have easy access to files inside the sandbox from the outside. This is one of those times where it becomes very clear to me that I'm a programmer, not a sysadmin. Any help would be much appreciated!
Chroot jails can be really insecure when you are running a complete sandbox environment. Attackers have complete access to kernel functionality and may, for example, mount drives to access the "host" system.
I would suggest that you use linux-vserver. You can think of linux-vserver as an improved chroot jail with a complete Debian installation inside. It is really fast, since it runs within a single kernel and all code is executed natively.
I personally use linux-vserver for separation of all my services, and there are only barely noticeable performance differences.
Have a look at the linux-vserver wiki for installation instructions.
regards, Dennis
I second what xardias says, but recommend OpenVZ instead.
It's similar to Linux-Vserver, so you might want to compare those two when going this route.
I've set up a webserver with a proxy HTTP server (nginx), which then delegates traffic to different OpenVZ containers (based on hostname or requested path). Inside each container you can set up Apache or any other webserver (e.g. nginx, lighttpd, ..).
This way you don't have one Apache for everything, but could create a container for any subset of services (e.g. per project).
OpenVZ containers can quite easily be updated all together ("for i in $(vzlist -H -o ctid); do vzctl exec $i apt-get upgrade; done")
The files of the different containers are stored on the hardware node, so you can quite easily access them by SFTPing into the hardware node.
Apart from that you could add a public IP address to some of your containers, install SSH there and then access them directly from the container.
I've even heard of SSH proxies, so the extra public IP address might be unnecessary even in that case.
You could always set it up inside a virtual machine and keep an image of it, so you can re-roll it if need be. That way the server is abstracted from your actual computer, and any viruses or so forth are contained inside the virtual machine. As I said before, if you keep an image as a backup, you can restore to your previous state quite easily.
To make sure it is said: chroot jails are rarely a good idea. Despite the intention, they are very easy to break out of; in fact, I have seen it done by users accidentally!
No offense, but if you don't have time to watch for security patches, and stay aware of security issues, you should be concerned, no matter what your setup. On the other hand, the mere fact that you're thinking about these issues sets you apart from the other 99.9% of owners of such machines. You're on the right path!
I find it astonishing that nobody mentioned mod_chroot and suEXEC, which are the basic things you should start with, and, most likely the only things you need.
You should use SELinux. I don't know how well it's supported on Debian; if it's not, just install CentOS 5.2 with SELinux enabled in a VM. It shouldn't be too much work, and it's much, much safer than any amateur chrooting, which is not as safe as most people believe.
SELinux has a reputation for being difficult to administer, but if you're just running a webserver, that shouldn't be an issue. You might just have to flip a few booleans with setsebool to let httpd connect to the DB, but that's about it.
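For example, the usual boolean for letting Apache talk out to a database is set like this (-P makes it persist across reboots):

setsebool -P httpd_can_network_connect_db 1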
While all of the above are good suggestions, I'd also suggest adding an iptables rule to disallow unexpected outgoing network connections. Since the first thing most automated web exploits do is download the rest of their payload, preventing the outbound connection can slow the attacker down.
Some rules similar to these can be used (Beware, your webserver may need access to other protocols):
iptables --append OUTPUT -m owner --uid-owner apache -m state --state ESTABLISHED,RELATED --jump ACCEPT
iptables --append OUTPUT -m owner --uid-owner apache --protocol udp --destination-port 53 --jump ACCEPT
iptables --append OUTPUT -m owner --uid-owner apache --jump REJECT
If using Debian, debootstrap is your friend coupled with QEMU, Xen, OpenVZ, Lguest or a plethora of others.
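A debootstrap run is a one-liner that drops a minimal Debian tree into a directory, which you can then hand to whichever virtualization tool you pick (suite, path, and mirror here are just examples):

debootstrap stable /srv/debian-root http://deb.debian.org/debian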
Make a virtual machine. Try something like VMware or QEMU.
What problem are you really trying to solve? If you care about what's on that server, you need to prevent intruders from getting into it. If you care about what intruders would do with your server, you need to restrict the capabilities of the server itself.
Neither of these problems can be solved with virtualization without severely crippling the server itself. I think the real answer to your problem is this:
Run an OS that provides you with an easy mechanism for OS updates.
Use the vendor-supplied software.
Back up everything often.