How safe is a fresh CentOS 6 standard server installation? - linux

Is a standard CentOS installation relatively safe to use as a web server (setting aside CMS security; it will only run WordPress)? The components are:
- Virtualmin & Webmin
- APC caching
- Apache, MySQL and PHP
Everything is installed with default settings.
I installed the CentOS server at home and access it 100% from the local network.
If it is not safe then what is the minimum requirement for safety?

'Safe' is too relative a term really. CentOS 6, Virtualmin and Webmin all have security bugs filed against them, some of which can even be exploited automatically by scripts and packages like Metasploit.
That said, no system will ever be perfectly secure unless you bury it underground with no net connection, so here are some good initial steps to take to improve security a little:
Turn off services and daemons that you don't need. For instance, if you will use SFTP rather than FTP for file transfer, turn the FTP service off.
Enforce a policy of unique, secure passwords of a decent length.
Install system updates, especially security updates.
Modify iptables rules to disallow access to unused ports, and look into further iptables settings that can help (see the example after this list).
Consider key-based logins, two- or three-factor authentication, etc., and weigh the pros and cons (the Google Authenticator PAM module is very easy to install, for example).
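As a concrete example, a minimal iptables policy on CentOS 6 might look like the following (a sketch only; the open ports are assumptions, so adjust them to the services you actually run):
iptables -A INPUT -i lo -j ACCEPT # allow loopback traffic
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT # allow replies to connections the server initiated
iptables -A INPUT -p tcp --dport 22 -j ACCEPT # allow SSH
iptables -A INPUT -p tcp --dport 80 -j ACCEPT # allow HTTP
iptables -P INPUT DROP # drop everything else inbound
service iptables save # persist the rules across reboots (CentOS 6)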
That's a good start. A key thing is to keep an eye on the server: monitor for unusual bandwidth usage or unexpected logins.
No box is a fortress, but you can at the very least discourage opportunists.

Related

Security of Minecraft server mods containing Java code

I am running a Java Minecraft server on my Linux server. I have been asked to install some mods (e.g. data packs) on the server which appear to contain Java code written by a third party other than Mojang.
Is this Java code restricted in what it can do, or can it run any arbitrary code it likes (e.g. read /etc/passwd, open TCP ports, claim huge amounts of memory, etc.)?
In other words, how risky are Minecraft mods containing Java code?
This is a very good question. I was actually wondering that too. I have spent a bit of time looking at the binaries of Minecraft and the Spigot Minecraft server. I assume we both use the Java Edition.
First of all, Java code, once you run it on a host, can do anything. The only mechanism in Java that can prevent that and protect the user from the developer is the security manager. Once you turn on the security manager (which is an opt-in mechanism), you can define a set of rules that Java will obey, e.g. it will not write into directories you don't allow it to.
So the question is: does Minecraft use the security manager by default? I am 99% sure it does not. Nobody uses the security manager because it is a pain to configure correctly and things stop working every time you get it wrong (you know an application uses the security manager when you run into policy misconfiguration problems every now and then).
The Minecraft client is launched from an executable, and I would not know where to turn the security manager on even if I wanted to. There is a bit of hope with the Spigot server: you can enable the security manager with -Djava.security.manager and supply your policy with -Djava.security.policy==my.policy. But getting the policy right will be a pain. I will try to look into it when I have a free week or so.
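For what it's worth, here is a minimal sketch of what that could look like with Spigot (the directory, the jar name, the port and the policy contents are assumptions, not a tested configuration, and note that newer JDKs have deprecated the security manager):
cat > my.policy <<'EOF'
grant {
    // let the server touch only its own directory
    permission java.io.FilePermission "/srv/minecraft/-", "read,write,delete";
    // let it use the default Minecraft port
    permission java.net.SocketPermission "*:25565", "listen,accept,connect,resolve";
};
EOF
java -Djava.security.manager -Djava.security.policy==my.policy -jar spigot.jar nogui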
Minecraft mods can act with the full permissions of the user running Minecraft. If you don't trust the author of a mod, then there's basically two approaches to safely use it:
Audit the source code yourself, and then compile the mod yourself so you know it matches the binary
Run Minecraft in a sandbox such as a VM, or as a heavily restricted user.
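A minimal sketch of the restricted-user option (the user name and paths are placeholders, and the nologin path varies by distro):
useradd --system --create-home --home-dir /srv/minecraft --shell /usr/sbin/nologin minecraft # dedicated unprivileged account
sudo -u minecraft java -Xmx2G -jar /srv/minecraft/server.jar nogui # run the server as that user only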

Ubuntu Server Default Security on AWS

I'm totally new to Linux but have been developing on windows platforms for years. I'd like to set up an Ubuntu server on AWS to house Node.js. If I run through the default install for Ubuntu server, load Node.js and start up a simple Node.js server on port 80 is there anything else I need to do to secure the server?
There are many ways to harden a server; I will only name two that are absolutely necessary.
On Ubuntu Server these may or may not be active already, but you should always check.
Activate a firewall
The simplest way to manage iptables firewall rules is ufw. Type in your terminal:
ufw default deny # Silently deny access to all ports except those mentioned below
ufw allow 22/tcp # Allow access to SSH port
ufw allow 80/tcp # Allow access to HTTP port
ufw enable # Enable firewall
ufw reload # Reload to make sure the rules were applied correctly
Be sure to allow SSH, otherwise you will lock yourself out of your server. Also note that UFW (and iptables) lets you allow or deny individual IP addresses and subnets.
Force pubkey login in SSH, disable root login and use fail2ban
Password login is weak if an attacker can try to access your server at any time, unless you use a long and impossible-to-remember pseudo-random password. SSH can instead handle authentication via public/private keys, which are more robust and far less predictable, being generated from a random seed.
First generate your own key pair and add your public key to ~/.ssh/authorized_keys on the server, so that you do not lock yourself out (a minimal key-setup example is shown at the end of this section). After, and only after, have a look at /etc/ssh/sshd_config. The two relevant options are:
PermitRootLogin no
PasswordAuthentication no
This way, the attacker must guess the username of the administrator before even trying the password, because they cannot log in as root. You don't need to log in as root to get root privileges; you can elevate from your user account with su or sudo.
Finally, use fail2ban to temporarily ban by IP address after a certain number of wrong attempts to authenticate (so that attackers cannot brute force that easily). I said temporarily because if an attacker spoofs your legitimate IP, he/she can perform a DoS on you.
After applying all changes, restart the daemon with:
service ssh restart
I will repeat it: be careful, check everything, or you will lock yourself out of your server.
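For reference, the key setup mentioned above usually looks like this (the user and host names are placeholders; use -t rsa on older OpenSSH versions that lack ed25519):
ssh-keygen -t ed25519 # generate a key pair on your local machine
ssh-copy-id youruser@your.server # append the public key to ~/.ssh/authorized_keys on the server
ssh youruser@your.server # confirm key login works BEFORE disabling password authentication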
Other remarks
A default Debian/Ubuntu installation is secure enough to be exposed on the Internet without fearing any major flaw. Still, you should always review security settings, gather information about software you are deploying on the server and periodically inspect logs searching for abnormal patterns.
Other tools that might be useful are Apparmor, providing MAC profiles for most system services (Postfix, HTTPd...), LXC for sandboxing, chroots, etc... It depends on how critical the infrastructure is.
I think this topic is too broad for a single SO answer.
The best place to start would probably be to map out the security best practices and the knowledge you need to gain.
Knowledge Centers:
CSA - Cloud Security Alliance: the place to get a full understanding of what is required to run a server in the cloud.
OWASP - Open Web Application Security Project. Deals with your web app. Take a look at the Top 10 list.
PCI - The payment card industry regulator. Though you are probably not storing credit cards, this is a good source to learn from. Here is an intro.
Now you have several approaches to deal with it:
Enterprise approach - learn, plan, implement, test, create ongoing processes.
Guerrilla approach - Iterative: find the lowest hanging fruit and handle it.
Hybrid - combine some properties from both approaches.
Regarding your lowest hanging fruit / most critical attack vectors:
Your perimeter, aka proper firewall configuration - since you are running on AWS, you should consider using their powerful network-based firewall (aka Security Groups). For simple use-cases you can use their console UI; for more complex setups you might want to add dedicated security management services such as Dome9 that can assist with managing both network-based and host-based security policies (a CLI sketch follows this list).
Utilize a WAF (web application firewall) - consider mod_security, a host-based WAF that can be installed on the nginx that (hopefully) sits in front of your Node.js, or alternatively use WAF-as-a-service from Incapsula or Cloudflare.
Set up proper centralized logging. Compare Splunk Cloud, Sumo Logic, LogEntries and Loggly to find your service of choice.
Harden your server authentication and accounts (too long to cover here)
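As a small illustration of the Security Groups point above (a sketch assuming the AWS CLI is configured; the group ID and source IP are placeholders):
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 443 --cidr 0.0.0.0/0 # public HTTPS
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr 203.0.113.5/32 # SSH only from your own IP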

Amazon Community AMIs + Security

I'm looking to launch a linux EC2 instance.
Although I understand Linux quite well, my limited ability to secure/harden a Linux OS would undoubtedly leave me vulnerable to attack, e.g. there are others who know more about Linux security than me.
I'm looking to just run Linux, Apache & PHP5.
Are there any recommended Amazon AMIs that come pre-hardened, running Linux/Apache/PHP or something similar?
Any advice would be greatly appreciated.
Thank you.
Here is an older article regarding this (I haven't read it, but it's probably a good place to start): http://media.amazonwebservices.com/Whitepaper_Security_Best_Practices_2010.pdf
I would recommend a few best practices off the top of my head:
1) Move to VPC, and control inbound and outbound access.
2a) Disable password authentication in SSH & only allow SSH from known IPs
2b) If you cannot limit SSH access by IP (due to roaming etc.), allow password authentication and use Google Authenticator to provide multi-factor authentication.
3) Put an elastic load balancer in front of all public facing websites, and disable access to those servers except from the ELB
4) Create a central logging server, that holds your logs in a different location in case of attack.
5) Change all system passwords every 3 months
6) Employ an IDS; as a simple place to start I would recommend Tripwire.
7) Check for updates regularly (you can employ a monitoring system like Nagios w/NRPE to do this on all your servers). If you're not a security professional you probably don't have time to be reading Bugtraq all day, so use the services provided by your OS (for CentOS/RHEL it's yum); see the example after this list.
8) Periodically (every quarter) do an external vulnerability assessment. You can learn and use Nessus yourself (for non-corporate use) or use a third party such as Qualys.
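For item 7, a sketch of applying only security updates with yum (this assumes your repositories publish security metadata, which stock CentOS repos may not):
yum install yum-plugin-security # CentOS/RHEL 6; built into yum on 7 and later
yum check-update --security # list pending security updates
yum update --security # apply only security updates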
If you're concerned and in doubt, contract a security professional for an audit. This shouldn't be too cost prohibitive and can give you some great insight.
Actually, you can always relaunch your server from a pre-configured AMI if something happens.
It can be done very easily with Auto Scaling, for example. Use SSH Without a Password. Adjust your Security Groups accordingly. Here's a good article on Securing Your EC2 Instance.
You have to understand two things:
Tight security makes life hard for attackers as well as for you...
Security is an ongoing task.
Having your server secure at a specific point in time says nothing about the future.
New exploits and patches are published every day, and a lot of "development" activity makes security unstable.
Solution?
You might consider services like https://pagodabox.com/
where you get dedicated PHP resources without having to manage Linux, security and so on.
Edit:
Just to emphasize...
Running a production system, where you are responsible for the ongoing security of the site, forces you to do much more than starting out with a secure instance!
Otherwise, your site will become much less secure as time passes by (and as more people learn about it).
As I see it (for a real production site), you have 2 options:
Get a security expert (in-house or freelance) who will check your site regularly and apply needed patches and so on.
Get a hosting service that will manage the security aspect for you.
I pointed to one service like that, where you can put your PHP code in and they will take care of everything else for you.
I would check out this type of service for every production site that doesn't have the ability to get real, periodic security checkups/fixes.
Security is a very complex field... do not underestimate the risks...
One of the things I like most about using Amazon is how quickly and easily I can restrict my attack surface. I've made a prioritized list here. Near the end it gets a bit advanced.
Launch in a VPC
Put your web server behind a load balancer, ELB or ALB (terminate SSL there too)
Only allow web traffic from your load balancer
Create a restrictive security group. The only things allowed into your host should be incoming traffic from the load balancer and SSH from your IP (or your DHCP subnet if your ISP does not offer a static address)
Enable automatic security updates
yum-cron (Amazon Linux)
or unattended-upgrades (Ubuntu); see the example at the end of this answer
Harden SSH
disallow root login and default Amazon accounts
disallow password login in favor of SSH keys
Lock down your AWS root account with 2FA and a long password.
Create and use IAM credentials for day-to-day operations
If you have a data layer deploy encrypted RDS and put it in a private subnet
Explore connecting to RDS with IAM credentials (no more db password saved in a conf file)
Check out YubiKey for 2FA SSH.
Advanced: for larger or more important deployments you might consider using something like ThreatStack. They can warn you of AWS misconfigurations (an S3 bucket containing customer data open to the world?) and security vulnerabilities in packages on your hosts. They also alert on signs of compromise and keep a command log, which is useful for investigating security incidents.
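A sketch of the automatic-updates item from the list above (package names are the stock ones; the service commands depend on your init system):
yum install yum-cron # Amazon Linux / CentOS
chkconfig yum-cron on && service yum-cron start # or: systemctl enable --now yum-cron on systemd hosts
apt-get install unattended-upgrades # Ubuntu
dpkg-reconfigure -plow unattended-upgrades # enables the periodic upgrade job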

How do I secure a production server after inheriting it from the previous development vendor?

We received access to the environment, but I now need to go through the process of securing it so that the previous vendor can no longer access it, or the Web applications running on it. This is a Linux box running Ubuntu. I know I need to change the following passwords:
SSH
FTP
MySQL
Control Panel Admin
Primary Application Admin
However, how do I really know I've completely secured the system using best practices, and am I missing anything else that I need to do other than just changing passwords?
Three simple steps:
Back up configurations and source files from the web server, and dump the SQL tables
Reinstall operating system
Follow standard hardening steps on fresh OS
Regardless of who it was, they could have installed any old crap on there (rootkits) that you can't configure away.
You will probably get more responses at serverfault.com on these kinds of questions.
There are several things you can do to secure SSH by editing your sshd_config file which is usually in /etc/ssh/:
Disable Root Logins
PermitRootLogin no
Change the ssh port from Port 22
Port 9222
Manually specify which accounts can log in (user names are space-separated):
AllowUsers Andrew Jane Doe
SecurityFocus has a good article about securing MySQL, although it's a bit dated.
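As a quick first step for MySQL (assuming a stock install that ships the bundled helper script), you can change the root password, drop anonymous users and review accounts like this:
mysql_secure_installation # set/change the root password, drop anonymous users, remove the test DB
mysql -u root -p -e "SELECT user, host FROM mysql.user;" # review which accounts exist and where they may connect from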
The best thing you could do would be to reinstall and make sure that when you bring files over from the old system to the new one, it is just data, not executables that could be nasty. If that is too much, change all the passwords, watch the logs for a few weeks, and use iptables to block the former vendor. Also, given that the box could have a rootkit at the kernel level, it's probably a good idea to swap that out too, and watch traffic coming out of the box for anything that might be going to the vendor. It really is a hassle to take someone else's machine and declare it safe; I would go as far as to say it is nearly impossible.
Side note: this isn't really programming related, so it probably shouldn't be on this site.

Securing a linux webserver for public access

I'd like to set up a cheap Linux box as a web server to host a variety of web technologies (PHP & Java EE come to mind, but I'd like to experiment with Ruby or Python in the future as well).
I'm fairly versed in setting up Tomcat to run on Linux for serving up Java EE applications, but I'd like to be able to open this server up, even just so I can create some tools I can use while I am working in the office. All the experience I've had with configuring Java EE sites has been for intranet applications, where we were told not to focus on securing the pages for external users.
What is your advice on setting up a personal Linux web server in a secure enough way to open it up for external traffic?
This article has some of the best ways to lock things down:
http://www.petefreitag.com/item/505.cfm
Some highlights:
Make sure no one can browse the directories (see the sketch after this list)
Make sure only root has write privileges to everything, and only root has read privileges to certain config files
Run mod_security
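For the directory-browsing point, a minimal Apache sketch (the path is a placeholder for your document root):
<Directory /var/www/html>
    # disable directory listings
    Options -Indexes
    # ignore .htaccess overrides unless you need them
    AllowOverride None
</Directory>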
The article also takes some pointers from this book:
Apache Security (O'Reilly Press)
As far as distros, I've run Debian and Ubuntu, but it just depends on how much you want to do. I ran Debian with no X and just ssh'd into it whenever I needed anything. That is a simple way to keep overhead down. Or Ubuntu has some nice GUI tools that make it easy to control Apache/MySQL/PHP.
It's important to follow security best practices wherever possible, but you don't want to make things unduly difficult for yourself or lose sleep worrying about keeping up with the latest exploits. In my experience, there are two key things that can help keep your personal server secure enough to throw up on the internet while retaining your sanity:
1) Security through obscurity
Needless to say, relying on this in the 'real world' is a bad idea and not to be entertained. But that's because in the real world, baddies know what's there and that there's loot to be had.
On a personal server, the majority of 'attacks' you'll suffer will simply be automated sweeps from machines that have already been compromised, looking for default installations of products known to be vulnerable. If your server doesn't offer up anything enticing on the default ports or in the default locations, the automated attacker will move on. Therefore, if you're going to run an SSH server, put it on a non-standard port (>1024) and it's likely it will never be found. If you can get away with this technique for your web server then great, shift that to an obscure port too.
2) Package management
Don't compile and install Apache or sshd from source yourself unless you absolutely have to. If you do, you're taking on the responsibility of keeping up-to-date with the latest security patches. Let the nice package maintainers from Linux distros such as Debian or Ubuntu do the work for you. Install from the distro's precompiled packages, and staying current becomes a matter of issuing the occasional apt-get update && apt-get -u dist-upgrade command, or using whatever fancy GUI tool Ubuntu provides.
One thing you should be sure to consider is what ports are open to the world. I personally just open port 22 for SSH and port 123 for ntpd. But if you open port 80 (HTTP) or FTP, make sure you at least know what you are serving to the world and who can do what with it. I don't know a lot about FTP, but there are millions of great Apache tutorials just a Google search away.
Bit-Tech.Net ran a couple of articles on how to setup a home server using linux. Here are the links:
Article 1
Article 2
Hope those are of some help.
#svrist mentioned EC2. EC2 provides an API for opening and closing ports remotely. This way, you can keep your box running. If you need to give a demo from a coffee shop or a client's office, you can grab your IP and add it to the ACL.
It's safe and secure if you keep your voice down about it (i.e., rarely will someone come after your home server if you're just hosting a glorified webroot on a home connection) and keep your wits up about your configuration (i.e., avoid using root for everything, make sure you keep your software up to date).
On that note, although this thread will potentially dwindle down to just flaming, my suggestion for your personal server is to stick with anything Ubuntu (get Ubuntu Server here); in my experience, it's the quickest way to get answers when asking questions on forums (not sure what to say about uptake though).
My home server security BTW kinda benefits (I think, or I like to think) from not having a static IP (runs on DynDNS).
Good luck!
/mp
Be careful about opening the SSH port to the wild. If you do, make sure to disable root logins (you can always su or sudo once you get in) and consider more aggressive authentication methods within reason. I saw a huge dictionary attack in my server logs one weekend going after my SSH server from a DynDNS home IP server.
That being said, it's really awesome to be able to get to your home shell from work or away... and adding on the fact that you can use SFTP over the same port, I couldn't imagine life without it. =)
You could consider an EC2 instance from Amazon. That way you can easily test out "stuff" without messing with production. And only pay for the space, time and bandwidth you use.
If you do run a Linux server from home, install OSSEC on it for a nice lightweight IDS that works really well.
[EDIT]
As a side note, make sure that you do not run afoul of your ISP's Acceptable Use Policy and that they allow incoming connections on standard ports. The ISP I used to work for had it written in their terms that you could be disconnected for running servers over port 80/25 unless you were on a business-class account. While we didn't actively block those ports (we didn't care unless it was causing a problem) some ISPs don't allow any traffic over port 80 or 25 so you will have to use alternate ports.
If you're going to do this, spend a bit of money and at the least buy a dedicated router/firewall with a separate DMZ port. You'll want to firewall off your internal network from your server so that when (not if!) your web server is compromised, your internal network isn't immediately vulnerable as well.
There are plenty of ways to do this that will work just fine. I would usually just use a .htaccess file. Quick to set up and secure enough. Probably not the best option, but it works for me. I wouldn't put my credit card numbers behind it, but other than that I don't really care.
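For reference, a basic-auth sketch (paths and the user name are placeholders; the directory's AllowOverride setting must permit AuthConfig for the .htaccess to take effect):
htpasswd -c /etc/apache2/.htpasswd someuser # create the password file; omit -c when adding more users
Then, in the .htaccess file of the directory you want to protect:
AuthType Basic
AuthName "Restricted"
AuthUserFile /etc/apache2/.htpasswd
Require valid-user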
Wow, you're opening up a can of worms as soon as you start opening anything up to external traffic. Keep in mind that what you consider an experimental server, almost like a sacrificial lamb, is also easy pickings for people looking to do bad things with your network and resources.
Your whole approach to an externally-available server should be very conservative and thorough. It starts with simple things like firewall policies, includes the underlying OS (keeping it patched, configuring it for security, etc.) and involves every layer of every stack you'll be using. There isn't a simple answer or recipe, I'm afraid.
If you want to experiment, you'll do much better to keep the server private and use a VPN if you need to work on it remotely.
