How can I secure memcached/beanstalkd in a hostile cloud environment? - security

Here's what my servers (in Amazon EC2) would look like:
Server 1                      Server 2                  Server 3
__________________________    ______________________    _______________
| Cloud Monitor Daemon    |   | Memcached daemon   |    | beanstalkd  |
|                         |   | Memcached daemon   |    _______________
| "Hostile" user process  |   | Memcached daemon   |
| "Hostile" user process  |   | Memcached daemon   |
| "Hostile" user process  |   | Memcached daemon   |
| "Hostile" user process  |   ______________________
| "Hostile" user process  |
__________________________
There are multiple user processes on one server. Each user then has their own memcached instance running on a (separate) server (alongside many other memcached instances). Without any sort of security (as it is by default), user process B could guess the port of the memcached instance of user A and access it. How can I secure this system so that user C can only access memcached instance C and no other (even though the memcached instances are all on the same server)? My users should not have to do anything to make use of the security (they just continue connecting to the memcached port as usual); it should all happen automatically, handled by the system.
Also, the Cloud Monitor Daemon on Server 1, along with the "hostile" user processes, needs to be able to access a remote beanstalkd server. Beanstalkd has no authentication either, so if my Monitor Daemon can access beanstalkd, so can the "hostile" user processes, and I don't want that. How can I secure this part?

I mentioned some tips on securing memcached in a blog post recently. For your case, SASL will probably help a lot.
I don't know if beanstalk ever got SASL support, but that's kind of a different app.
You could build a VPN or enable IPSEC to control access to all services on all machines at the node level, of course.
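For the memcached side, a rough sketch of what enabling SASL might look like, assuming a SASL-enabled memcached build; the config path, tenant name, and runtime user below are illustrative, not taken from the answer:
# Tell the Cyrus SASL library which mechanism memcached should accept
echo "mech_list: plain" > /etc/sasl2/memcached.conf
# Create a password entry for one tenant in the SASL database (prompts for a password)
saslpasswd2 -c -a memcached userA
# Start memcached with SASL enabled (-S); clients must then authenticate over the binary protocol
memcached -S -u memcache -p 11211
Note that SASL does require each client to authenticate over memcached's binary protocol.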

You can start beanstalkd bound to Server 3's local IP (127.0.0.1)
and then use SSH tunnels from Server 1 to Server 3.
Combine it with inetd and SSH keys to make it failsafe.
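As a sketch of that setup (the port number, key path, and host names are placeholders): bind beanstalkd to loopback on Server 3, then forward a local port from Server 1 over SSH.
# On Server 3: listen only on the loopback interface (default beanstalkd port 11300)
beanstalkd -l 127.0.0.1 -p 11300
# On Server 1: forward local port 11300 to Server 3's loopback, using a key so no password prompt is needed
ssh -N -f -L 11300:127.0.0.1:11300 -i ~/.ssh/tunnel_key monitor@server3
The Monitor Daemon then talks to 127.0.0.1:11300 on Server 1; note that the local end of the tunnel is still reachable by other processes on Server 1, so this protects the network hop rather than separating local users.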

I ended up going with plain old iptables. It lets me write per-uid rules and is very easy to configure programmatically. Most importantly, the users don't need to be involved in the process; they can continue using the standard protocols without having to deal with authentication, and iptables will drop any "naughty" packets that are going where they shouldn't.
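For illustration, rules along these lines could express the per-uid policy described above (the uid, destination address, and port range are placeholders, not taken from the answer):
# On Server 1: let user A (uid 1001) reach only their own memcached instance
iptables -A OUTPUT -p tcp -d 10.0.0.20 --dport 11211 -m owner --uid-owner 1001 -j ACCEPT
# ...and drop that user's packets to any other memcached port on Server 2
iptables -A OUTPUT -p tcp -d 10.0.0.20 --dport 11211:11311 -m owner --uid-owner 1001 -j DROP
The owner match only works in the OUTPUT chain (locally generated packets), which fits here because all the "hostile" processes originate on Server 1.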

A couple of weeks ago Amazon announced Amazon VPC (Virtual Private Cloud), which we are now using to secure memcached and beanstalkd.
Works great! I seriously recommend it; one less piece of overhead to deal with ourselves.

Related

How to find the number of HTTP(S) (TCP) connections opened by my Node.js micro-service (using axios keepAlive for HTTP) in a GKE k8s environment?

Problem Description/Context
I have a Node.js-based application that uses Axios to make HTTP requests (outbound REST API calls) against a web service (say https://any.example.restapis.com). These HTTP requests occasionally took > 1 minute. After some debugging, we tried the httpsAgent property to keep the HTTP connections alive (persistent); that did the trick, and now the APIs take < 1 second and the application is working OK. Basically, my understanding is that with this property the TCP connections used by the HTTP calls are now persistent: the httpsAgent keeps the connections alive based on the default configs and opens multiple TCP connections against the web service as required by the load - essentially maintaining a pool of connections.
httpsAgent: new https.Agent({ keepAlive: true }),
Question
We are not yet sending the full traffic to the micro-service (just 1%), so I would like to understand in detail what is happening underneath to make sure the fix is indeed complete and my micro-service will scale to full traffic.
So, can anyone please let me know how, after SSHing into the pod's container, I can check whether my Node.js application is indeed making a number of TCP (socket) connections against the web service rather than just using a single TCP connection and keeping it alive? (I tried to use the netstat -atp command as below; however, I'm not able to make the connection.) It would be great if anyone could help me with how to check the number of TCP connections made by my micro-service.
// example cmd -
// Looking at cmds like netstat, lsof as they may (hoping!) give me details that I want!
netstat -atp | grep <my process ID>
In a microservices architecture, the number of server to server connections increases dramatically compared to alternative setups. Interactions which would traditionally have been an in-memory process in one application now often rely on remote calls to other REST based services over HTTP, meaning it is more important than ever to ensure these remote calls are both fast and efficient.
The netstat command is used to show network status.
# netstat -at : To list all tcp ports.
# netstat -lt : To list only the listening tcp ports.
It is used more for problem determination than for performance measurement. However, the netstat command can be used to determine the amount of traffic on the network to ascertain whether performance problems are due to network congestion.
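Applied to the original question, one way to count the pod's established connections to the remote API from inside the container could look like this (the remote port 443 and the process name node are assumptions, and ss/lsof may need to be installed in a slim image):
# Count ESTABLISHED sockets held by the node process toward port 443
ss -tnp state established '( dport = :443 )' | grep node | wc -l
# Alternative with lsof, replacing <pid> with the Node.js process ID
lsof -p <pid> -a -iTCP -sTCP:ESTABLISHED
If keep-alive plus pooling is working, you should see the same few remote address:port pairs reused across calls rather than a new short-lived connection per request.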

AWS EC2 Error: The site can't be reached - ec2.us-west-1.compute.amazonaws.com took too long to respond. Deploy NodeJS

I currently have an EC2 instance up and running with Amazon Linux, and I transferred my project (which contains React/NodeJS/Express) onto the EC2 instance via SFTP using FileZilla.
For the EC2's Security Groups, I opened a port for 3000 (protocol: tcp, source: 0.0.0.0/0), which is how my Express is defined as well.
So I sshed into the EC2 instance and ran the project's Express server, and I can see it listening on port 3000 in the terminal. But once I hit the public DNS with ec2...us-west-1.compute.amazonaws.com:3000, it says The site can't be reached - ec2...us-west-1.compute.amazonaws.com took too long to respond.
What could be the issue and how can I go about from here to connect to it?
Thank you in advance and will upvote/accept answer.
Just check if your Node.js server is running on the EC2 instance.
Debugging:
First check that it is working properly locally.
Check for the Node.js server on the EC2 instance:
sudo netstat -tulpn | grep :3000
Try running the server with the --verbose flag, i.e. npm run server --verbose;
it will show the server's logs while starting.
Check the security group settings for the EC2 instance.
Try to connect with ip:port, i.e. 35.2..:3000
If it is still not working and the response is taking a long time,
that may mean some other service is running on the same port.
Try this on EC2:
sudo killall -9 node
npm run server
Then connect using the IP (54.4.5.*:3000) or the public DNS (http://ec2...us-west-1.compute.amazonaws.com:3000).
Hope it helps :)
You may be encountering an issue with outbound traffic. You may be inside a company's network, either physically connected or VPN'd in. In some instances, your VPN isn't set up to handle split traffic, so you must abide by your company's outbound restrictions.
In a situation like this, you would want to use a proxy to access your site. When locking down your security group, make sure you use your proxy's public IP (not your company's).
Usually, when we have connectivity issues, it is something basic or a firewall. I assume you have checked whether a firewall is running on either end, eg. iptables -L -n. Also, any protocol analyzer like wireshark or tcpdump would tell you where packets to port 3000 are visible.
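For example, two quick checks along those lines (the interface name eth0 is an assumption):
# On the EC2 instance: confirm the app answers locally
curl -v http://localhost:3000
# Then watch whether connection attempts to port 3000 arrive from outside at all
sudo tcpdump -ni eth0 'tcp port 3000'
If curl works locally but tcpdump never shows inbound SYNs, the problem is upstream of the instance (security group, network ACL, or a corporate/VPN restriction as described above).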

How to secure Amazon EC2 with Tomcat7 and mySQL

I'm very new to EC2. I have Tomcat 7 and MySQL installed. The security group I have set up is:
Type              Protocol   Port
Custom TCP Rule   TCP        8080
SSH               TCP        22
MYSQL             TCP        3306
For outbound, it allows all traffic.
I got a report from Amazon that said the following:
Instance ID: i-1e42db06
AWS ID: 772517067349
Reported Activity: DoS
What should I do to stop it?
And I also got a bill as below
$0.090 per GB - first 10 TB / month data transfer out beyond the global free tier 637.521 GB
Please advise me on steps to protect my instance in EC2.
Update: email from Amazon
We've received a report(s) that your EC2 instance(s)
AWS ID: 772517067349
Instance Id: i-1e42db06
IP Address: 172.31.25.202
has been implicated in activity that resembles a Denial of Service attack against remote hosts; please review the information provided below about the activity.
Please take action to stop the reported activity and reply directly to this email with details of the corrective actions you have taken. If you do not consider the activity described in these reports to be abusive, please reply to this email with details of your use case.
If you're unaware of this activity, it's possible that your environment has been compromised by an external attacker, or a vulnerability is allowing your machine to be used in a way that it was not intended.
Check the instance monitoring panel in EC2 to see your traffic, and check the logs on your server to see what kind of traffic it is.
MySQL has just had a zero-day exploit that you could be vulnerable to, and SSH has had quite a few critical bugs lately, so it's not only your firewall settings that you need to take into account here; you need to secure the services behind those ports too.
Besides this, if your web application deployed in tomcat contains a vulnerability, you are open to all sort of attacks, many of which will be reflected in an increase of traffic. Tomcat itself, of course, must also be up to date and properly secured.
There are just too many things that could be happening to enumerate, but if the "transfer out" as it's worded in your question refers to outbound traffic, you could have been compromised and the server could be part of a botnet. It's not clear from the above whether they are reporting that you are suffering a DoS or that you are attempting one.
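One concrete hardening step, consistent with the answers above though not spelled out in them, is to narrow the wide-open SSH and MySQL rules to a single admin address; a sketch with a placeholder security group ID and IP:
# Remove the world-open MySQL rule...
aws ec2 revoke-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 3306 --cidr 0.0.0.0/0
# ...and re-add it restricted to one admin IP (do the same for port 22)
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 3306 --cidr 203.0.113.10/32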

Display stats from remote linux server

My intention is to display stats like Load avg and RAM usage from a remote server onto an LCD panel 24/7. I would like the information to be updated once every 1-3 seconds.
Other threads have suggested using SSH commands to retrieve the information.
ssh root@192.168.1.252 uptime && cat /proc/meminfo
Is using SSH suitable for my purposes? My concerns are that:
my log files may be bloated because of all the login attempts, and
the overhead of setting up and tearing down an SSH connection every few seconds may be significant.
Is there any such package out there, or do I have to code it myself? I would prefer one that keeps the connection open to reduce overhead. I do not require encryption as both servers are on the LAN.
Thanks in advance.
Several things to note:
Don't use root if you don't need to. For uptime and cat /proc/meminfo you certainly don't need root. Use another user.
Note the difference between these two:
ssh user@hostname uptime && cat /proc/meminfo
ssh user@hostname 'uptime && cat /proc/meminfo'
The first one will execute cat /proc/meminfo on your local machine, while the second will execute it on the remote. I think you want to use the second version. (You want the memory info of the remote machine, not your local machine, don't you?)
You can use connection multiplexing to hit two birds with one stone: reduce the overhead of establishing new connections and avoid polluting the server log. To do this, add a configuration like this in your ~/.ssh/config file:
Host somename
User the_username
Hostname the_hostname
ControlMaster auto
ControlPath ~/.ssh/master-somename
You can choose any somename, it's like an alias. With this setting, you can connect to the server simply as:
ssh somename
While this remote session is still alive (until you logout), you can open new connections from another terminal, and they will reuse the existing connection, bypassing authentication and effectively eliminating the overhead of new connections.
This is actually a common trick when working with slow remote servers where establishing new connections is a noticeable overhead. In fact I use this setting to apply it to all remote servers I work with:
Host *
ControlMaster auto
ControlPath ~/.ssh/master-%r@%h:%p
I usually recommend this trick for everyone.
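Tying this back to the original question, a rough polling loop for the LCD that reuses the multiplexed connection might look like this (somename is the alias from the config above; the 2-second interval is arbitrary):
# Open a background master connection once
ssh -N -f somename
# Each iteration then reuses the master connection instead of re-authenticating
while true; do
    ssh somename 'uptime && cat /proc/meminfo'
    sleep 2
done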

How can I tunnel Telnet connections between two interfaces?

I have the following network:
PC1 --|¯¯¯¯¯¯¯¯¯| |¯¯¯¯¯¯¯¯¯|-- board2
PC2 --| gateway |-- (eth0) [server] (eth1) --| gateway |
PC3 --|_________| |_________|-- board1
As you can see, the server (running Linux) has 2 interfaces so that the PCs and embedded boards are not on the same subnetwork.
I would like to develop a simple application (or use an existing one) that tunnels incoming Telnet connections from eth0 (using a specific port) to the boards through eth1.
I don't have root privileges on the server, but I have a regular Unix account. I don't want the PCs to actually "log in" to the server; I just want them to pass through it to connect to the boards. The server has Python, PHP and Perl installed.
I want to support multiple connections. The port number could be used to forward the connection to the right board (say I have 10 boards, then the tunnel listens on ports 3000 to 3009).
I can imagine a simple web application to do that, but I want a native Telnet connection that will support CTRL+C and all that stuff, and that allows the use of any terminal emulator (e.g. PuTTY) on the PCs, which run Windows.
Any help would be much appreciated.
It sounds like you're looking for a TCP proxy. A proxy accepts connections on one port, connects to another ip/port, and then passes traffic back and forth.
If you have xinetd on your system you already have what you need. The redirect directive for a service causes xinetd to open a connection to another host/port and act as a proxy. See this document for an example.
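For illustration, a redirect stanza might look roughly like this (the service name, listening port, and board address are placeholders):
service board1-telnet
{
    disable      = no
    type         = UNLISTED
    socket_type  = stream
    protocol     = tcp
    port         = 3000
    wait         = no
    user         = nobody
    redirect     = 192.168.2.101 23
}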
balance is another very simple proxy tool that will do what you need. This is probably easier to run as a non-root user, especially since it's configured entirely on the command line.
Since you're not root on your server you'll need to run these on ports > 1024, but otherwise you should be all set.
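With balance, mapping one high listening port to each board could look roughly like this (board addresses are placeholders; check the balance man page for exact options):
# Listen on port 3000 and forward every connection to board 1's Telnet port
balance 3000 192.168.2.101:23
# Likewise, port 3001 to board 2
balance 3001 192.168.2.102:23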
