Amazon Community AMIs + Security - Linux

I'm looking to launch a linux EC2 instance.
Although I understand Linux quite well, my ability to secure/harden a Linux OS would undoubtedly leave me vulnerable to attack; i.e., there are others who know far more about Linux security than I do.
I'm looking to just run Linux, Apache & PHP5.
Are there any recommended Amazon AMIs that come pre-hardened, running Linux/Apache/PHP or something similar?
Any advice would be greatly appreciated.
Thank you.

Here is an older article regarding this (I haven't read it, but it's probably a good place to start): http://media.amazonwebservices.com/Whitepaper_Security_Best_Practices_2010.pdf
I would recommend a few best practices off the top of my head:
1) Move to VPC, and control inbound and outbound access.
2a) Disable password authentication in SSH and only allow SSH from known IPs (see the sketch after this list).
2b) If you cannot limit SSH access by IP (due to roaming, etc.), allow password authentication and use Google Authenticator to provide multi-factor authentication.
3) Put an Elastic Load Balancer in front of all public-facing websites, and block access to those servers except from the ELB.
4) Create a central logging server that holds your logs in a separate location in case of attack.
5) Change all system passwords every 3 months.
6) Employ an IDS; as a simple place to start I would recommend Tripwire.
7) Check for updates regularly (you can employ a monitoring system like Nagios w/NRPE to do this on all your servers). If you're not a security professional you probably don't have time to read Bugtraq all day, so use the update tools provided by your OS (on CentOS/RHEL that's yum).
8) Periodically (every quarter) do an external vulnerability assessment. You can learn and run Nessus yourself (for non-corporate use) or use a third party such as Qualys.
If you're concerned and in doubt, contract a security professional for an audit. This shouldn't be too cost prohibitive and can give you some great insight.
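For item 2a, a minimal sketch of what that can look like, assuming the AWS CLI is configured and you already have a working key-based login; the security group ID and source address are placeholders:
# Allow SSH only from a known address
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 22 \
    --cidr 203.0.113.10/32
# Disable password authentication in SSH, then restart the daemon
sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo systemctl restart sshd    # or: sudo service sshd restart on older systems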

Actually, you can always relaunch your server from a pre-configured AMI if something happens.
It can be done very easily with Auto Scaling, for example. Use SSH without a password (key-based login). Adjust your Security Groups accordingly. Here's a good article on Securing Your EC2 Instance.
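As a rough sketch, baking such an AMI from a running instance can be done with the AWS CLI (the instance ID, name and description are placeholders):
# Create an AMI from the running instance; this reboots it unless --no-reboot is passed
aws ec2 create-image \
    --instance-id i-0123456789abcdef0 \
    --name "web-baseline-01" \
    --description "Pre-configured web server baseline"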

You have to understand 2 things:
Tight security makes life hard for attackers as well as for you...
Security is an ongoing task.
Having your server secure at a specific point in time doesn't say anything about the future.
New exploits and patches are published every day, and a lot of "development" activity renders security unstable.
Solution?
You might consider services like https://pagodabox.com/
where you get PHP resources without having to manage Linux, security, and so on.
Edit:
Just to emphasize...
Running a production system, where you are responsible for the ongoing security of the site, forces you to do much more than starting up with a secure instance!
Otherwise, your site will become much less secure as time passes (and as more people learn about it).
As I see it (for a real production site), you have 2 options:
Get a security expert (in house or freelance) who will check your site regularly and apply needed patches and so on.
Get hosting service that will manage the security aspect for you.
I pointed to one service like that, where you can put your PHP code in and they will take care of everything else for you.
I would look into this type of service for every production site that doesn't have the ability to get real periodic security checkups/fixes.
Security is a very complex field... do not underestimate the risks...

One of the things I like most about using Amazon is how quickly and easily I can restrict my attack surface. I've made a prioritized list here. Near the end it gets a bit advanced.
Launch in a VPC
Put your web server behind a load balancer, ELB or ALB (terminate SSL there too)
Only allow web traffic from your load balancer
Create a restrictive security group. The only things allowed into your host should be incoming traffic from the load balancer and SSH from your IP (or your DHCP subnet if your ISP does not offer a static address)
Enable automatic security updates (see the sketch after this list)
yum-cron (Amazon Linux)
or unattended-upgrades (Ubuntu)
Harden SSH
disallow root login and default Amazon accounts
disallow password login in favor of SSH keys
Lock down your AWS root account with 2FA and a long password.
Create and use IAM credentials for day-to-day operations
If you have a data layer, deploy encrypted RDS and put it in a private subnet
Explore connecting to RDS with IAM credentials (no more db password saved in a conf file)
Check out YubiKey for 2FA SSH.
Advanced: For larger or more important deployments you might consider using something like Threat Stack. They can warn you about AWS misconfiguration (an S3 bucket containing customer data open to the world?) and security vulnerabilities in packages on your hosts. They also alert on signals of compromise and keep a command log, which is useful for investigating security incidents.
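For the automatic-update item above, a minimal sketch (package names and service commands differ slightly between distro versions, so treat this as a starting point):
# Amazon Linux / CentOS: install and enable yum-cron
sudo yum install -y yum-cron
sudo systemctl enable --now yum-cron
# Ubuntu: install and enable unattended-upgrades
sudo apt-get install -y unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades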


How to securely host a file on a RHEL server and enable download for users

I have programmed an application that users can use to process genome data. This application relies on a 10GB database file that users have to download in order to run the application. At the moment, I have stored this file on Google Drive, but the download bandwidth is limited, so if a number of users download the file on a certain day, it will not work for others and they will get errors running the application.
My solution would be to host the file on our research server, create a user that only has access rights to this folder and nothing else, and make the file downloadable from the server via scp within the application (which is open source) through that user.
My question now is, is this safe to do or are people potentially able to hack into our server? If this method would be a security risk, what would be a better way to provide this file?
Thank you in advance!
Aloha
You can set up something like the free Seafile (https://www.seafile.com/en/home/), or ask the admin to set it up for you; it is pretty secure, like a self-hosted Google Drive with 2FA authentication.
Another nice and easy tool is Filebrowser on GitHub (https://github.com/filebrowser/filebrowser).
I would not really advise giving people shell/scp access inside your network.
And hosting anything inside a company network is in general not the wisest idea; there is always a risk involved.
I would set up a Seafile/Filebrowser solution on a cheap rented server outside your network and upload the file there. Or, if you have a small PC left over, set it up in a DMZ, a zone that has special access restrictions inside your company.
You want to use SSH (scp) as a transport and authentication method for file hosting. It's possible to keep this safe with caution. For example, GitHub uses SSH for transport when providing git access with the git+ssh protocol.
Now for the caution part: if you haven't done it before, it's not a trivial task.
The proper way to achieve this would be to set up an isolated SSH server in a chroot environment, and set up an SSH user on this isolated SSH instance only (not a system user added by e.g. useradd). Then you can add only the files that are absolutely necessary to the chroot, and provide SSH access to users.
(Nowadays you might want to consider using Linux filesystem namespaces, if applicable, to replace chroot, but I'm not sure about this.)
As for other options, setting up a simple Nginx server for static file hosting might be a lot easier, provided you have some understanding of HTTP and TLS. There are lots of write-ups on the Internet about this.
Either way, if you are going to expose your server to the Internet or an intranet, you need to take care of firewalling. Consider learning about nftables or firewalld or the like, if you haven't already.
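If you go the static-hosting route, a minimal sketch of an nginx server block (the domain, certificate paths and file location are all placeholders; you would normally obtain the certificate with something like Let's Encrypt):
sudo tee /etc/nginx/conf.d/dataset.conf <<'EOF'
server {
    listen 443 ssl;
    server_name data.example.org;
    ssl_certificate     /etc/ssl/certs/data.example.org.pem;      # placeholder paths
    ssl_certificate_key /etc/ssl/private/data.example.org.key;
    location /genome-db/ {
        root /srv/datasets;   # serves files under /srv/datasets/genome-db/
        autoindex off;        # no directory listing
    }
}
EOF
sudo nginx -t && sudo nginx -s reload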
SSH is reasonably safe. Always keep software up-to-date.
Set up an sftp-only user with chrooted directory. In /etc/ssh/sshd_config:
Match User MyUser
ChrootDirectory /var/ssh/chroot
ForceCommand internal-sftp
AllowTcpForwarding no
PermitTunnel no
X11Forwarding no
This user will not get a shell (because of internal-sftp), and cannot see files outside of /var/ssh/chroot.
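One caveat worth sketching: sshd requires the chroot directory itself to be owned by root and not writable by anyone else, so give the user a subdirectory inside it (paths and username follow the snippet above):
sudo mkdir -p /var/ssh/chroot/files
sudo chown root:root /var/ssh/chroot       # the chroot itself must be root-owned
sudo chmod 755 /var/ssh/chroot             # and not writable by group/others
sudo chown MyUser: /var/ssh/chroot/files   # the user's files go here
sudo systemctl restart sshd                # pick up the Match block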
Use key-based authentication on the client side, in addition to the password.
A good description of the setup process for SSH keys:
https://www.digitalocean.com/community/tutorials/how-to-configure-ssh-key-based-authentication-on-a-linux-server
Your solution is moderately safe.
A better solution is to put the file on a server accessible via SFTP, behind a password, but also to encrypt the file: this way you introduce a second layer of protection.
On a Linux server you should be able to use a tool like gpg to encrypt your file.
Next, you share the decryption key with your partners over a secure channel, e.g. end-to-end encrypted messaging software.
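For the encryption step, a minimal gpg sketch using a symmetric passphrase (the filename is a placeholder; you could also encrypt to your partners' public keys instead):
# Encrypt: prompts for a passphrase and writes genome-db.tar.gz.gpg
gpg --symmetric --cipher-algo AES256 genome-db.tar.gz
# Decrypt on the user's side
gpg --output genome-db.tar.gz --decrypt genome-db.tar.gz.gpg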

NodeJS TLS/TCP server in need of an external firewall

Problem:
I have an AWS EC2 instance running FreeBSD. In there, I'm running a NodeJS TLS/TCP server. I'd like to create a set of rules (in my NodeJS application) to be able to individually block IP addresses programmatically based on a few logical conditions.
I'd like to run an external (not on the same machine/instance) firewall or load balancer that I can control from NodeJS programmatically, such that when certain conditions are met, I can block a specific remote address (IP) before it reaches the NodeJS instance.
Things I've tried:
I initially looked into nginx as an option, running it on a second instance and placing my NodeJS server behind it, but after skimming through the NGINX Cookbook: Advanced Recipes for High-Performance Load Balancing, I learned that only NGINX Plus (the paid version) allows remote/API control and customization. While I believe that paying $3500/license is not too much (considering all of NGINX Plus's features), I simply cannot afford to buy it at this point in time; in addition, the only features I'd be using (at this point) would be the remote API control and the IP address blocking.
My second thought was to go with the AWS ELB (Elastic Load Balancer) by integrating the AWS SDK into my project. That sounded feasible; unfortunately, after reading a few forum threads and part of their documentation (unless I'm mistaken), it seems the two features I need are not available on the ELB. AWS seems to offer an entirely different service called WAF that I honestly don't understand very well (both as a service and from a feature standpoint).
I have also (briefly) looked into CloudFlare, as it was recommended in one of the posts here on Stack Overflow, though I can't really tell whether their firewall would allow this level of (remote) control.
Question:
What are my options? What would you recommend I do?
I think Nginx provides this kind of functionality; please refer to the link.
If you want to block an IP in front of your Node TCP server, you can just edit an nginx config file and deny that IP address.
Frankly speaking, if I were you I would use AWS WAF, but if you don't want to use it, you can simply handle it in Node.js.
In Node.js you could keep a global array variable storing all blocked IP addresses and, upon each connection, check whether the connecting host's IP is in that list. The problem is that when the machine or application restarts, you lose all information about blocked IPs. As a solution you can set up Redis (a key-value database, though it supports other data types too) and store the blocked IPs there. Since Redis keeps its data in RAM, all interaction with the DB is practically instant, and because Redis periodically backs its data up to disk, it can restore the old database after a restart and continue working from RAM.
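As a rough sketch of the nginx deny-file idea above (assuming nginx sits in front of the Node server and includes /etc/nginx/conf.d/*.conf in the relevant context; the IP address is a placeholder):
# Append a deny rule for the offending address, then validate and reload nginx
echo "deny 203.0.113.7;" | sudo tee -a /etc/nginx/conf.d/blocked-ips.conf
sudo nginx -t && sudo nginx -s reload
For a raw TCP/TLS proxy the equivalent allow/deny directives live in the stream context (ngx_stream_access_module), but the reload mechanics are the same.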

Securely hosting other people on apache2

For the context: I'm a student and I must do a project with some other people in my class. My role is to prepare a web server that each of them can use and access from anywhere. I plan to host everything on a dedicated server that I already have, to avoid additional cost, and give each person a subdomain that will be handled by a VirtualHost. They will be able to send files to the server via SFTP (OpenSSH); each person will get an account, chrooted to their virtual host directory.
My main problem: will this be secure? I mean, if one of the users sets an easy password or does anything risky, can someone access other people's virtual hosts or even the dedicated host machine? I already thought about .htaccess files; they will be deactivated. Is there another way to get out of an Apache virtual host?
Things to note: they will have Apache, PHP and access to a MySQL (or maybe MariaDB, I don't know for now) database. So they may be able to upload some old, insecure code. Some of these users are not very educated in cybersecurity.
The server is running Ubuntu 16.04 LTS.
Thanks for the advice.
If you limit their access to only their own home directory, that's a good start.
A good additional layer of security would be to implement 2FA; check out Duo Mobile, which you can implement for SSH logins. (I'd need more details, e.g. what options do they have to log in to the server?)
If the users are not very educated in cybersecurity, as you mentioned, it will be difficult for them to escape the virtual host they have access to.
Although I'd need more details, such as whether each virtual host will have a separate database or they will all talk to a central database; also, as a paranoid measure, consider where the server is hosted. There are a lot of variables beyond what you described, but it is best to keep the server on its own network with nothing critical in the same subnet, just in case.
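Since .htaccess is being deactivated anyway, here is a minimal per-user virtual host sketch that also walls PHP into the user's directory (hostnames and paths are placeholders; open_basedir as shown assumes mod_php, with PHP-FPM it would go in the pool config instead):
sudo tee /etc/apache2/sites-available/student1.conf <<'EOF'
<VirtualHost *:80>
    ServerName student1.example.com
    DocumentRoot /var/www/student1
    <Directory /var/www/student1>
        AllowOverride None          # .htaccess files are ignored
        Require all granted
    </Directory>
    # Keep PHP scripts from reading files outside this vhost
    php_admin_value open_basedir /var/www/student1:/tmp
</VirtualHost>
EOF
sudo a2ensite student1 && sudo systemctl reload apache2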

Ubuntu Server Default Security on AWS

I'm totally new to Linux but have been developing on Windows platforms for years. I'd like to set up an Ubuntu server on AWS to host Node.js. If I run through the default install for Ubuntu Server, load Node.js and start up a simple Node.js server on port 80, is there anything else I need to do to secure the server?
There are many ways to harden a server; I will only name two that are absolutely necessary.
On Ubuntu Server these might or might not be activated already, but you should always check.
Activate a firewall
The simplest way to handle iptables rules for firewall is ufw. Type in your terminal:
ufw default deny # Silently deny access to all ports except those mentioned below
ufw allow 22/tcp # Allow access to SSH port
ufw allow 80/tcp # Allow access to HTTP port
ufw enable # Enable firewall
ufw reload # Be sure that everything was loaded right
Be sure to allow SSH, otherwise you will be locked out of your server. Also note that UFW (and iptables) lets you allow or deny single IP addresses and subnetworks.
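For example (the addresses are placeholders):
ufw allow from 203.0.113.10 to any port 22 proto tcp   # SSH only from one address
ufw deny from 198.51.100.0/24                          # drop traffic from a whole subnet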
Force pubkey login in SSH, disable root login and use fail2ban
Password login is weak if an attacker can try accessing your server at any time, unless you use a long and impossible-to-remember pseudo-random sequence. SSH can handle authentication via public/private keys, which are more robust and far less predictable, being generated from a random seed.
First generate your own pair of keys and add your public key to ~/.ssh/authorized_keys on the server, so that you are not locking yourself out. After, and only after, have a look at /etc/ssh/sshd_config. The two relevant options are:
PermitRootLogin no
PasswordAuthentication no
This way, the attacker must guess the username of the administrator before even trying the password, because they cannot log in as root. You don't need to log in as root to get root privileges; you can elevate from your user account with su or sudo.
Finally, use fail2ban to temporarily ban an IP address after a certain number of failed authentication attempts, so that attackers cannot brute-force that easily (a minimal jail sketch follows below). I said temporarily because if an attacker spoofs your legitimate IP, they can perform a DoS on you.
After applying all changes, restart the daemon with:
service ssh restart
I will repeat it, be careful, check everything or you will lock yourself out of your server.
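For the fail2ban part, a minimal sketch (the retry count and ban time are arbitrary example values, and on older releases the jail may be named [ssh] rather than [sshd]):
sudo apt-get install -y fail2ban
sudo tee /etc/fail2ban/jail.local <<'EOF'
[sshd]
enabled  = true
maxretry = 5
bantime  = 600
EOF
sudo systemctl restart fail2ban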
Other remarks
A default Debian/Ubuntu installation is secure enough to be exposed on the Internet without fearing any major flaw. Still, you should always review security settings, gather information about software you are deploying on the server and periodically inspect logs searching for abnormal patterns.
Other tools that might be useful are AppArmor, which provides MAC profiles for most system services (Postfix, HTTPd, ...), LXC for sandboxing, chroots, etc. It depends on how critical the infrastructure is.
I think this topic is too wide for a SO answer.
The best place to start would probably be to map out the security best practices and the knowledge you need to gain.
Knowledge Centers:
CSA - Cloud Security Alliance: the place to gain a full understanding of what is required to run a server in the cloud.
OWASP - Open Web Application Security Project: deals with your web app. Take a look at the Top 10 list.
PCI - the payment card industry standards body. Though you are probably not storing credit cards, this is a good source to learn from. Here is an intro.
Now you have several approaches to deal with it:
Enterprise approach - learn, plan, implement, test, create ongoing processes.
Guerrilla approach - Iterative: find the lowest hanging fruit and handle it.
Hybrid - combine some properties from both approaches.
Regarding your lowest hanging fruit / most critical attack vectors:
Your perimeter, aka proper firewall configuration - since you are running on AWS you should consider using their powerful network-based firewall (aka Security Groups). For simple use cases you can use the console UI. For more complex setups you might want to add a dedicated security management service such as Dome9 that can assist with managing both network-based and host-based security policies.
Utilize a WAF (web application firewall) - consider either ModSecurity, a host-based WAF that can be installed on the nginx that (hopefully) sits in front of your Node.js, or alternatively use WAF-as-a-service from Incapsula or Cloudflare.
Setup proper centralized logging. Compare Splunk Cloud, Sumo Logic, LogEntries and Loggly to find your service of choice.
Harden your server authentication and accounts (too long to cover here)

How safe is a fresh CentOS 6 standard server installation?

Is installing CentOS using the standard installation relatively safe for a web server (without considering CMS safety, and only for WordPress)? The contents are:
- Virtualmin & Webmin
- APC caching
- Apache, MySQL and Php
Everything is installed with default settings.
I installed the CentOS server at home and access it 100% from the local network.
If it is not safe, then what is the minimum requirement for safety?
'Safe' is too relative a term really. CentOS 6, Virtualmin and Webmin all have security bugs filed against them, some of which can even be exploited automatically by scripts and packages like Metasploit.
That said, no system will ever be perfectly secure unless you bury it underground with no net connection, so here are some good initial steps to take to improve security a little:
Turn off services and daemons that you don't need. For instance, it could be that you won't be using FTP, and will use SFTP for file transfer. If so, turn off the ones you aren't using.
Enforce a policy of unique and secure passwords of a decent length
Install system updates, especially security updates.
Modify iptables settings to disallow access to unused ports (see the sketch after this list). Look into further iptables settings that can help.
Consider key-based logins, two- or three-factor authentication, etc., and weigh the pros and cons (the Google Authenticator PAM module is very easy to install, for example).
That's a good start. A key thing is to keep an eye on the server; try to monitor for unusual bandwidth usage or unusual logins.
No box is a fortress, but you can at the very least discourage opportunists.
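For the iptables item above, a minimal CentOS 6-style sketch (add the ACCEPT rules before switching the default policy so you don't lock yourself out over SSH):
iptables -A INPUT -i lo -j ACCEPT                                 # loopback
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT  # replies to outbound traffic
iptables -A INPUT -p tcp --dport 22 -j ACCEPT                     # SSH
iptables -A INPUT -p tcp --dport 80 -j ACCEPT                     # HTTP
iptables -P INPUT DROP                                            # drop everything else
service iptables save                                             # persist across reboots on CentOS 6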
