Current setup:
2 master servers, 12 worker servers.
Workers are connected to the masters via ssh-copy-id, and both masters and workers write data into Redis queues on the masters.
The issue I have been facing for the past week is that Redis is writing data into the authorized_keys file. I can't reproduce the issue or confirm which server is doing it.
I looked into the Redis config file and didn't find any setting that would make Redis write to the authorized_keys file.
Has anyone else faced this or a similar issue? I clear the authorized_keys file and it gets written to again.
Your servers have most probably been (or are still being) attacked by a "cracker". While it is possible that the attack is over, you should treat your servers as compromised and act accordingly. This is in all likelihood the same approach described by Salvatore Sanfilippo, a.k.a. antirez, Redis' author and a security researcher in his past, in this blog post.
To prevent this type of attack, which uses Redis as a vector, please refer to the instructions in the Securing Redis section of the Quickstart page as a starting point, and the Security page for more information.
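A minimal sketch of what that looks like in redis.conf (the bind address and password are placeholders to adapt): the authorized_keys trick described in that post relies on an attacker reaching an unauthenticated instance and running CONFIG SET dir, CONFIG SET dbfilename and SAVE, so the point is to close exactly those doors.

bind 127.0.0.1                      # listen only on loopback or a private interface
protected-mode yes                  # Redis 3.2+: refuse outside connections when no password is set
requirepass long-random-password    # require AUTH from every client
rename-command CONFIG ""            # disable CONFIG so dir/dbfilename can't be rewritten remotely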
More discussion is at /r/redis
Update: more ramblings on the same topic at https://redislabs.com/blog/3-critical-points-about-security
I am running an EKS cluster, and I have a use case which I will explain below.
I am trying to create a scalable CTF (Capture the Flag). The problem is that there are a few challenges in which the participants have to write a few files within the pod. Obviously, I don't want another participant to get a remote session on the same pod while the first user is writing files in it; if that happens, the second user will automatically get the solution.
To avoid this problem, we thought of implementing something like "session anti-affinity", i.e. if a pod already has a session with one user, the ingress should send other users' requests to another pod, but we are not able to figure out how to implement this.
Please help us out.
If you are just looking for a session-affinity solution using ingress, you need to enable the proxy protocol first, which will carry the source IP information. The ingress can then use that information to achieve affinity.
But the problem statement you mentioned is really a kind of locking: at any given point only one user should be serviced. I'm not sure session affinity will help solve that.
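If cookie-based affinity is enough, and assuming you run the ingress-nginx controller (other controllers use different annotations), a sketch like the one below pins a user's requests to one pod via the standard affinity annotations; the names, host and service are placeholders:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ctf-challenge
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"              # pin each client to one backend pod
    nginx.ingress.kubernetes.io/session-cookie-name: "CTFSESSION"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "3600"  # keep the pinning for an hour
spec:
  rules:
    - host: ctf.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: challenge-svc
                port:
                  number: 80

Note that this only keeps one user on the same pod; it does not by itself stop a second user from being routed to that pod, which is the locking concern above.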
I have a forum which contains groups; new groups are created all the time by users. Currently I'm using node-cache with a TTL to cache groups and their content (posts, likes and comments).
The server worked great at the beginning, but performance decreased as more people started using the app, so I decided to use the node.js Cluster module as the next step to improve performance.
node-cache will cause a consistency problem: the same group could be cached in two workers, so if one of them changes it, the other will not know (unless you do something about it).
The first solution that came to my mind is using Redis to store the whole group and its content with the help of Redis data types (sets and hash objects), but I don't know how efficient this would be.
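Roughly, the key layout I have in mind for the first solution looks like this (key names and ids are just illustrative; multi-field HSET needs Redis 4+):

redis-cli HSET group:42 name "kittens" owner "alice"    # group metadata as a hash
redis-cli SADD group:42:posts 101 102 103               # ids of the posts in the group
redis-cli HSET post:101 author "bob" likes 7            # each post as its own hash
redis-cli SADD post:101:comments 9001 9002              # comment ids per post
redis-cli EXPIRE group:42 3600                          # optional TTL, mirroring node-cache's ttl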
The other solution is using Redis to map requests to the correct worker. In this case the cached data is distributed randomly across the workers, so when a worker receives a request related to some group, it looks up the group's owner (the worker that holds this group's instance in memory) in Redis, asks that worker for the wanted data using node-ipc, and then returns it to the user.
Is there any problem with the first solution?
The second solution does not provide fairness (if all the popular groups land in the same worker); is there a solution for this?
Any suggestions?
Thanks in advance
I have just started using beanstalkd and pheanstalk, and I am curious whether the following situation is a security issue (and if not, why not?):
When designing a queue that will contain jobs for an eventual worker script to pick up and perform SQL database queries, I asked a friend what I could do to prevent an online user from connecting to port 11300 of my server and inserting a job into the queue himself, thereby causing a job with malicious code to be executed. I was told that I could include a password inside the job being sent.
However, after some time I realized that someone could perform a few simple commands in a terminal, obtain a job from the queue, find the password, and then create jobs with the password included:
telnet thewebsitesipaddress 11300 //creating a telnet connection
list-tubes //finding which tubes are currently being used
use a_tube_found //using one of the tubes found
peek-ready //see what's inside one of the jobs and find the password
What could be done to make sure this does not happen and my queue doesn't get hacked / controlled?
Thanks in advance!
You can avoid those situations by placing beanstalkd behind a firewall or in a private network.
DigitalOcean (for example) offers such a service where you have a private network IP address which can be accessed only from servers of the same location.
We've been using beanstalkd in our company for more than a year, and we haven't had any of those issues yet.
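For example, you can make beanstalkd listen only on loopback (or a private interface) and block the port from outside; a sketch assuming the standard -l/-p flags and iptables, with addresses as placeholders:

beanstalkd -l 127.0.0.1 -p 11300                                  # listen only on loopback
iptables -A INPUT -p tcp --dport 11300 ! -s 10.0.0.0/8 -j DROP    # or drop external traffic to the port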
I see, but what if the producer is a page called index.php, where, when someone visits it, a job is sent to the queue? In this situation, wouldn't the server have to be on an open network?
The browser has no way to get in contact with the job server; it only accesses the resources you allow it to, that is, the view page. Only the back-end is allowed to access the job server. Also, if you build the web application in such a way that the front-end is separated from the back-end, you're going to have even fewer potential security issues.
I've got an application that uses CouchDB as its database. The database (let's call it x.cloudant.com) is hosted on cloudant.com.
During development, I changed the account from x.cloudant.com to y.cloudant.com. After that (it may or may not have something to do with switching to the new account) I got problems:
The traffic on cloudant.com spiked up really high (on one day I had over 700k requests). Mostly they were light HTTP requests (GETs, HEADs). The user agent was CouchDB 1.5 or CouchDB 1.6.
Looking in the CouchDB logs, I saw a lot of error messages: HTTP 500 responses, replication being cancelled because the database was shut down, and failures to connect to the database.
My application would still sometimes connect to the old account (x.cloudant.com) even though I switched accounts over a week ago, meaning that alongside replicating the data to the new account (y.cloudant.com) it also tries to replicate the data to my old account (x.cloudant.com).
My replication settings are the default settings. I want to reduce the amount of traffic on cloudant.com. Has anyone experienced the same issues? How did you solve it?
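I assume the stale replication is defined somewhere I can inspect; is it enough to look at the standard endpoints, something like this (credentials are placeholders)?

curl -u user:pass 'https://y.cloudant.com/_active_tasks'                           # replications currently running
curl -u user:pass 'https://y.cloudant.com/_replicator/_all_docs?include_docs=true' # continuous replication documents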
Hi, I am setting up a cluster of machines using Chef at offsite locations. If one of these machines were stolen, what damage could the attacker do to my Chef server or other nodes by having possession of chef-validator.pem? What other things can they access through Chef? Thanks!
This was one of the items discussed at a recent Foodfight episode on managing "secrets" in chef. Highly recommended watching:
http://foodfightshow.org/2013/07/secret-chef.html
The knife bootstrap operation uploads this key when initializing new chef clients. Possession of this key enables the client to register itself against your chef server. That is actually its only function; once the client is up and running, the validation key is no longer needed.
But it can be abused.... As #cbl has pointed out, if an unauthorized 3rd party gets access to this key, they can create new clients that can see everything on your chef server that normal clients can see. It could theoretically also be used to mount a denial-of-service attack on your chef server by flooding it with registration requests.
The foodfight panel recommends a simple solution: enable the chef-client cookbook on all nodes. It contains a "delete_validation" recipe that will remove the validation key and reduce your risk exposure.
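For example (assuming the community chef-client cookbook), you can push that recipe onto an existing node's run_list with knife; the node name is a placeholder:

knife node run_list add NODE_NAME 'recipe[chef-client::delete_validation]'   # key is deleted on the next converge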
The validator key is used to create new clients on the Chef Server.
Once the attacker gets hold of it, he can pretend he's a node in your infrastructure and have access to the same information any node has.
If you have sensitive information in an unencrypted data bag, for example, he'll have access to that.
Basically he'll be able to run any recipe from any cookbook, do searches (and have access to all your other nodes' attributes), read data bags, etc.
Keep that in mind when writing cookbooks and populating the other objects on the server. You could also monitor the chef server for any suspicious client creation activity, and if you have any reason to believe that the validator key has been stolen, revoke it and issue a new one.
It's probably a good idea to rotate the key periodically as well.
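For the data bag point, encrypting sensitive items with a shared secret is one mitigation; a sketch, with the bag/item names and secret path as placeholders:

openssl rand -base64 512 > ~/.chef/encrypted_data_bag_secret                             # generate a shared secret
knife data bag create credentials mysql --secret-file ~/.chef/encrypted_data_bag_secret  # create an encrypted item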
As of Chef 12.2.0 the validation key is no longer required:
https://blog.chef.io/2015/04/16/validatorless-bootstraps/
You can delete your validation key on your workstation and then knife will use your user credentials to create the node and client.
There are also some other nice features of this, since whatever you supply for the run_list and environment is also applied to the node when it is created. No more relying on the first-boot.json file being read by the chef-client, and on the run having to complete before node.save creates the node at the end of the bootstrapping process.
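A validatorless bootstrap then looks roughly like this (the host, node name, SSH user and run_list are placeholders):

knife bootstrap node1.example.com -N node1 -x ubuntu --sudo --run-list 'recipe[base]'   # uses your user key, no validator needed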
Basically, chef-client uses two keys to authenticate to the server:
1) the organization validator.pem, and
2) the user.pem
Unless there is the correct combination of these two keys, chef-client won't be able to authenticate with the Chef server.
They can even connect any node to the Chef server with the stolen key via the following steps.
1: Copy and paste the validator key into the /etc/chef folder on any machine (e.g. as /etc/chef/validator.pem)
2: Create a client.rb file with the following details:
log_location STDOUT                                           # log to standard output
chef_server_url "https://api.chef.io/organizations/ORGNAME"   # the organization's endpoint on the Chef server
validation_client_name 'ORGNAME-validator'                    # validator client name for the organization
validation_key '/etc/chef/validator.pem'                      # path to the stolen validation key
3: Run chef-client to connect to the chef server