Redis configuration: need to allow remote connections, but securely

I have two EC2 instances that need to connect to an external Redis server. The Redis config is bound to 0.0.0.0 to allow this. Is there some sort of password/auth system for Redis connections? I need a way to allow my servers to connect to the remote Redis server but block everyone else.
I know I can do this with iptables by whitelisting only those EC2 IP addresses for port 6379, but I was wondering if there is a proper way to do this.

Redis sports a very basic form of authentication via password protection. To enable it, you'll need to add/uncomment the requirepass directive in your configuration file and have your clients authenticate with the AUTH command.
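For example (the hostname and password below are placeholders; use a long random password, since Redis can serve a great many AUTH attempts per second):

```shell
# Server side, in redis.conf -- add or uncomment:
#   requirepass s3cr3t-long-random-string
# then restart Redis.

# Client side: authenticate before issuing commands
redis-cli -h redis.example.com -p 6379
#   > AUTH s3cr3t-long-random-string
#   OK

# or non-interactively:
redis-cli -h redis.example.com -a s3cr3t-long-random-string PING
```

This is a config/CLI fragment against a remote server, so treat it as a sketch; combine it with the firewall whitelisting you mentioned rather than relying on the password alone.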
Another approach would be to add an extra layer of security, such as a secure proxy. Here's a howto: http://redislabs.com/blog/using-stunnel-to-secure-redis.
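The stunnel setup from that howto boils down to two small config files; the paths, hostname and certificate below are assumptions, not a definitive setup:

```shell
# On the Redis host: stunnel terminates TLS on 6380 and forwards to the
# local Redis, which can then bind to 127.0.0.1 instead of 0.0.0.0.
cat <<'EOF' > /etc/stunnel/redis-server.conf
[redis]
accept  = 0.0.0.0:6380
connect = 127.0.0.1:6379
cert    = /etc/stunnel/redis.pem
EOF

# On each EC2 client: a local stunnel listens on 127.0.0.1:6379 and
# tunnels to the server, so the app connects to "localhost" as usual.
cat <<'EOF' > /etc/stunnel/redis-client.conf
[redis]
client  = yes
accept  = 127.0.0.1:6379
connect = redis-host.example.com:6380
EOF
```

With this in place only the TLS port is exposed, and the Redis port itself never leaves the loopback interface on either end.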

Related

Allow only one device to access the server through SSH

I have a Linux server with SSH enabled, and I want to allow only my mobile phone to access it from anywhere, on any network.
I tried creating a firewall rule to allow a specific IP, but the problem is that my mobile's IP changes continuously.
So what is the procedure to accomplish this?
I tried a firewall rule to block all IPs.
I tried Fail2ban to ban every IP that enters a wrong password, but it blocks a huge number of IPs, which affects the system's performance.
It'll be difficult unless you figure out a way to expose an API via HTTPS from your server that can change the deny/allow rule when your mobile's IP changes. I personally don't know of an existing tool for that... I just use OpenVPN for my mobile to connect, and then SSH to remote systems over the VPN.
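For what it's worth, here is a rough sketch of what such a handler could look like on Linux, meant to sit behind an HTTPS front end (e.g. nginx with auth). The secret, the ipset name and the standing firewall rules are all made up for illustration:

```shell
#!/bin/sh
# Assumes standing rules that only admit IPs in the "ssh-allow" set:
#   iptables -A INPUT -p tcp --dport 22 -m set --match-set ssh-allow src -j ACCEPT
#   iptables -A INPUT -p tcp --dport 22 -j DROP

SECRET="change-me"   # shared secret known only to your phone

# allow_me CALLER_IP TOKEN -- replace the allowed IP with the caller's
allow_me() {
    ip="$1"; token="$2"
    [ "$token" = "$SECRET" ] || return 1          # reject bad tokens
    ipset create ssh-allow hash:ip 2>/dev/null    # idempotent create
    ipset flush ssh-allow                         # drop the previous phone IP
    ipset add ssh-allow "$ip"                     # allow the new one
}
```

The phone would hit the HTTPS endpoint with the token whenever its IP changes; everything else stays blocked. It's still more moving parts than the VPN approach.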

Is it safe to have the TCP port 2376 open by docker-machine (generic driver) to the internet?

I'm experimenting with Docker Machine, trying to set it up on an already existing machine using docker-machine create --driver generic. I noticed that it reconfigures the firewall so that port 2376 is open to the Internet. Does it also set up proper authentication, or is there a risk that I'm exposing root access to this machine as a side effect?
By default, docker-machine configures mutual TLS (mTLS) to both encrypt the communication, and verify the client certificate to limit access. From the docker-machine documentation:
As part of the process of creation, Docker Machine installs Docker and configures it with some sensible defaults. For instance, it allows connection from the outside world over TCP with TLS-based encryption and defaults to AUFS as the storage driver when available.
You should see environment variables configured by docker-machine, with DOCKER_HOST and DOCKER_TLS_VERIFY pointing at the remote host and enabling mTLS certificates. Typically, port 2375 is the unencrypted, unsecured port that should never be used, while 2376 is configured with at least TLS, and hopefully mTLS (without the mutual part verifying clients, security is non-existent). For more details on what it takes to configure this, see https://docs.docker.com/engine/security/https/
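Concretely, the client-side setup looks something like this (machine name, IP and paths are illustrative):

```shell
# docker-machine exports the connection settings for you:
eval "$(docker-machine env my-machine)"
echo "$DOCKER_HOST"        # e.g. tcp://203.0.113.10:2376
echo "$DOCKER_TLS_VERIFY"  # 1
echo "$DOCKER_CERT_PATH"   # e.g. ~/.docker/machine/machines/my-machine

# Equivalent explicit flags on the docker CLI; without a valid client
# cert/key pair, the daemon on 2376 refuses the connection:
docker --tlsverify \
  --tlscacert="$DOCKER_CERT_PATH/ca.pem" \
  --tlscert="$DOCKER_CERT_PATH/cert.pem" \
  --tlskey="$DOCKER_CERT_PATH/key.pem" \
  -H tcp://203.0.113.10:2376 version
```

This is a fragment that needs a live docker-machine host to run; the point is that access hinges entirely on possession of the client certificate.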
All that being said, docker with mTLS is roughly the same security as SSH with only key pair logins allowed. Considering the access it grants to the host, I personally don't leave either of these exposed to the internet despite being fairly secure. When possible, I use IP whitelists, VPNs, or other measures to limit access. But many may feel relatively safe leaving these ports exposed.
Unless you are using certificates to secure the socket, it's vulnerable to attack. See more info here.
In the past, some of my test cloud instances were compromised and turned into bitcoin miners. In one instance, since there were keys available on that particular host, the attacker could use those keys to create new cloud instances.

AWS, NodeJS - Connecting app to Mongodb on another EC2 instance

I am trying to connect my app, running on one EC2 instance, to MongoDB, running on another EC2 instance. I'm pretty sure the problem is in the security settings, but I'm not quite sure how to handle that.
First off, my app's instance is in an autoscaling group that sits behind an ELB. The inbound security settings for the instance and ELB allow access to port 80 from anywhere, as well as all traffic from its own security group.
The EC2 instance that runs Mongo accepts connections if its security group accepts all inbound traffic from anywhere. With any other configuration I've tried, the app reports that it cannot connect to the remote address. I've set rules to accept inbound traffic from all of my security groups, but it only seems to work when I allow all traffic from anywhere.
Also, my db instance is set up with an elastic ip. Should I have this instance behind an ELB as well?
So my questions are these:
1) How can I securely make connections to my EC2 instance running mongo?
2) In terms of architecture, does it make sense to run my database this way, or should I have this behind a load balancer as well?
This issue is tripping me up a lot more than I thought it would, so any help would be appreciated.
NOTE
I have also set the bind_ip=0.0.0.0 in /etc/mongo.conf
Your issue is that you are using the public elastic IP to connect to your database server from your other servers. This means that the connection is going out to the internet and back into your VPC, which presents the following issues:
Security issues due to the data transmission not being contained within your VPC
Network latency issues
Your database server's security group can't identify the security group of the inbound connections
Get rid of the elastic IP on the MongoDB server; there is no need for it unless you plan to connect to it from outside your VPC. Modify your servers to use the private internal IP address assigned to your database server when creating connections to it. Finally, lock your security group back down so that only your other security group(s) can access the DB.
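As a sketch with placeholder group IDs, the security-group change via the AWS CLI would look like this:

```shell
# Remove the wide-open rule on the DB group (sg-dbaaaaaa is a placeholder):
aws ec2 revoke-security-group-ingress --group-id sg-dbaaaaaa \
    --protocol tcp --port 27017 --cidr 0.0.0.0/0

# Allow MongoDB's port only from the app servers' security group,
# which works because the traffic now stays on the private network:
aws ec2 authorize-security-group-ingress --group-id sg-dbaaaaaa \
    --protocol tcp --port 27017 --source-group sg-appbbbbbb
```

Then point the app's connection string at the private address (e.g. mongodb://10.0.1.23:27017/mydb). Binding Mongo to 0.0.0.0 is tolerable once the security group only admits your app tier, though binding to the private IP is tighter still.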
Optional: Create a private hosted zone in Route53, with an A record pointing to your database server's private IP address, then use that hostname instead of the internal IP address.

Java - Can we ignore SSL verification for the local network?

Can we ignore SSL verification for the local network? My case is:
I have two applications deployed on one system. Due to some security constraints, these two applications cannot communicate over the internet; they can only communicate using their private IPs. But the certificate issued by the CA is valid only for the public IP (the one accessible from the internet), so when they try to make an HTTPS connection, it throws a Subject Alternative Name invalid exception.
I cannot use alternate certificate.
Can we configure the Java/JRE of the applications to ignore SSL validation?
Please suggest any alternate solution, if any.
It sounds to me like you might just be better off using HTTP on the local network.
If you need transport layer security on your LAN, you can probably use a VPN or SSH tunnel instead. And it sounds to me like you don't really need this, as you're OK with ignoring SSL handshake errors, which makes using SSL in the first place kind of moot.
You can set up your servers to listen on two ports, one for external requests over HTTPS, and one for internal requests on HTTP.
You can either set up your firewalls so that HTTP is only available from LAN IPs, or alternatively only listen on localhost and use a VPN or SSH tunnel to the target server and do the requests via the tunnel.
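If you go the firewall route, a minimal iptables sketch of "HTTP only from the LAN" could look like this (the port and subnet are assumptions; adjust to your network):

```shell
# Internal, plain-HTTP listener: reachable only from the private subnet
iptables -A INPUT -p tcp --dport 8080 -s 10.0.0.0/8 -j ACCEPT
iptables -A INPUT -p tcp --dport 8080 -j DROP

# The HTTPS port (443) stays open to everyone and keeps using the
# CA-issued certificate for the public IP.
```

This keeps certificate validation fully enabled everywhere it is actually used, instead of teaching the JRE to ignore SSL errors, which would undermine the external endpoint too.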

Create AWS EC2 Security Group for Intercommunicating Redis

I will be having Redis in a master-slave configuration where each Redis node is in a separate EC2 instance. Since each Redis slave will need to communicate with the master, I need to add the Redis' own security group ID as a source. However, I'm unsure as to what protocol Redis will be using. Should I set up the Security group rule as a Custom TCP with select access to ports, or should it just be the "All TCP" rule?
A custom TCP rule on port 6379 with the security group itself as the source is enough. Replication traffic from the slaves also goes to the master's port 6379, so no additional ports are needed.
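With the AWS CLI, that self-referencing rule looks like this (the group ID is a placeholder):

```shell
# The group references itself as the source, so only instances that are
# members of sg-0123abcd can reach each other's 6379 over the private network.
aws ec2 authorize-security-group-ingress --group-id sg-0123abcd \
    --protocol tcp --port 6379 --source-group sg-0123abcd
```

Attach the same group to the master and every slave, and intercommunication works while everything else stays blocked.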
