I am running an EKS cluster, and I have a use case which I will explain below.
I am trying to create a scalable CTF (Capture the Flag). The problem is that there are a few challenges in which participants have to write files inside the pod. Obviously, I don't want another participant to get a remote session on the same pod while the first user is writing those files; if that happens, the second user automatically gets the solution.
To avoid this, we thought of implementing something like "session anti-affinity": if a pod already has a session with one user, the ingress should send any new user's request to another pod. However, we cannot figure out how to implement this.
Please help us out.
If you are just looking for a session affinity solution using ingress, you need to enable the proxy protocol first, which preserves the source IP; the ingress can then use that information to achieve affinity.
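For reference, with the ingress-nginx controller the affinity itself is often done with a cookie rather than the source IP. A minimal sketch, with placeholder names (ctf-challenge, challenge-svc and ctf.example.com are assumptions):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ctf-challenge
  annotations:
    # Each user gets a cookie and keeps hitting the same pod.
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "ctf-session"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "3600"
spec:
  rules:
  - host: ctf.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: challenge-svc
            port:
              number: 80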
But the problem you describe is really a form of locking: at any given point only one user should be serviced by a pod. I am not sure session affinity will help solve that.
I am aware that there is a probe available at /hazelcast/health/ready on port 5701.
However, I need to do this programmatically through code, as I am using embedded Hazelcast deployed on a Kubernetes cluster and all communication should pass through the main application (that means Hazelcast cannot expose that endpoint, so HTTP requests through localhost would not suffice). I tried looking into the documentation but found no help on this.
The only thing I found is instance.getServer().getPartitionService().isLocalMemberSafe(), but I have no evidence that this is effectively the same as checking the readiness probe.
Any help would be appreciated, thanks!
The exact logic for the /ready endpoint is:
node.isRunning() && node.getNodeExtension().isStartCompleted()
I guess you can't use exactly the same logic from your code, but fairly good approximations are:
instance.getLifecycleService().isRunning() (the only difference is that it does not wait for the member to join the other members before reporting ready)
instance.getPartitionService().isClusterSafe() (the difference is that it also waits for all Hazelcast partition migrations to finish)
You can use either of them. If you want to be really sure that the Hazelcast member can receive traffic when it reports ready, then the second option is totally safe.
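For example, a small wrapper around those calls could look like this (a sketch; the class name is just illustrative):

import com.hazelcast.core.HazelcastInstance;

// Readiness check built from the two approximations above; isClusterSafe()
// is the stricter one, so this only reports ready once migrations are done.
public final class HazelcastReadiness {
    private final HazelcastInstance instance;

    public HazelcastReadiness(HazelcastInstance instance) {
        this.instance = instance;
    }

    public boolean isReady() {
        return instance.getLifecycleService().isRunning()
                && instance.getPartitionService().isClusterSafe();
    }
}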
I have a forum that contains groups, and new groups are created all the time by users. Currently I'm using node-cache with a TTL to cache groups and their content (posts, likes and comments).
The server worked great at the beginning, but performance decreased as more people started using the app, so I decided to use the Node.js cluster module as the next step to improve performance.
node-cache will cause a consistency problem: the same group could be cached in two workers, so if one copy changes, the other worker will not know (unless you notify it).
The first solution that came to my mind is using Redis to store the whole group and its content with the help of Redis data types (sets and hashes), but I don't know how efficient this would be.
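Roughly what I have in mind for the first solution (a rough sketch; ioredis is used here just as an example client, and the group:<id> key scheme is made up):

const Redis = require('ioredis');
const redis = new Redis();

// Store a group as a hash so every worker sees the same copy.
async function cacheGroup(group) {
  const key = `group:${group.id}`;
  await redis.hset(key, {
    name: group.name,
    posts: JSON.stringify(group.posts),   // posts/likes/comments serialized
  });
  await redis.expire(key, 600);           // same idea as the node-cache ttl
}

async function getGroup(id) {
  const data = await redis.hgetall(`group:${id}`);
  if (Object.keys(data).length === 0) return null;   // cache miss
  return { id, name: data.name, posts: JSON.parse(data.posts) };
}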
The other solution is using Redis to map requests to the correct worker. In this case the cached data is distributed across workers, so when a worker receives a request related to some group, it looks up the group's owner (the worker that holds that group in memory) in Redis, asks that worker for the wanted data using node-ipc, and then returns it to the user.
Is there any problem with the first solution?
The second solution does not provide fairness (all the popular groups could land in the same worker); is there a solution for this?
Any suggestions?
Thanks in advance
I have just started using beanstalkd and pheanstalk and I am curious whether the following situation is a security issue (and if not, why not?):
When designing a queue that will contain jobs for an eventual worker script to pick up and perform SQL database queries, I asked a friend what I could do to prevent an online user from connecting to port 11300 of my server and inserting a job into the queue himself, thereby causing malicious code to be executed. I was told that I could include a password inside the job being sent.
However, after some time I realized that someone could perform a few simple commands in a terminal to obtain a job from the queue, find the password, and then create jobs with the password included:
telnet thewebsitesipaddress 11300 //creating a telnet connection
list-tubes //finding which tubes are currently being used
use a_tube_found //using one of the tubes found
peek-ready //see what's inside one of the jobs and find the password
What could be done to make sure this does not happen and my queue doesn't get hacked / controlled?
Thanks in advance!
You can avoid those situations by placing beanstalkd behind a firewall or in a private network.
DigitalOcean (for example) offers such a service, where you have a private network IP address that can be accessed only from servers in the same location.
We've been using beanstalkd in our company for more than a year, and we haven't had any of those issues yet.
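As a concrete example of that (the commands are illustrative; adjust the addresses to your own setup): beanstalkd has no authentication of its own, so the usual approach is to bind it to a loopback or private-network address and firewall the port.

beanstalkd -l 127.0.0.1 -p 11300          # listen only on localhost (or a private IP)

# or, with a firewall such as ufw, allow only your own app servers:
ufw deny 11300
ufw allow from 10.0.0.0/24 to any port 11300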
I see, but what if the producer is a page called index.php, where a job is sent to the queue whenever someone visits it? In that situation, wouldn't the server have to be on an open network?
The browser has no way to get in contact with the job server; it only accesses the resources /you/ allow it to, that is, the view page. Only the back-end is allowed to access the job server. Also, if you build the web application so that the front-end is separated from the back-end, you're going to have even fewer potential security issues.
My current setup: 2 master servers, 12 worker servers.
The workers are connected to the masters through ssh-copy-id, and both masters and workers write data into Redis queues on the masters.
The issue I have been facing for the past week is that Redis is writing data into the authorized_keys file. I can't reproduce this issue or confirm which server is doing it.
I looked into the Redis config file and didn't find any setting that would make Redis write to the authorized_keys file.
Has anyone else faced this issue or something similar? I clear the authorized_keys file and it gets written into again.
Your servers are most probably being, or have been, attacked by a "cracker". While it is possible that the attack is over, you should treat your servers as compromised and act accordingly. This is in all likelihood the same attack described by Salvatore Sanfilippo, a.k.a. antirez, Redis' author and a security researcher in his past, in this blog post.
To prevent this type of attack, which uses Redis as a vector, please refer to the Securing Redis instructions in the Quickstart page as a starting point, and to the Security page for more information.
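As a starting point, the usual hardening in redis.conf looks like this (a sketch; pick your own password and adapt the bind address):

bind 127.0.0.1                  # listen only on loopback / your private interface
protected-mode yes              # refuse outside connections when no password is set
requirepass use-a-long-random-password-here
rename-command CONFIG ""        # the authorized_keys trick relies on CONFIG SET dir/dbfilename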
More discussion is at /r/redis
Update: more ramblings on the same topic at https://redislabs.com/blog/3-critical-points-about-security
I need to create a multi-node web server that allows controlling the number of worker processes in real time and changing each process's UID and GID.
For example, at startup the server starts 5 workers and pushes them into a worker pool.
When the server gets a new request, it searches for a free worker, sets its UID or GID if needed, and gives it the request to process. If there are no free workers, the server creates a new one, sets its UID or GID, pushes it into the pool as well, and so on.
Can you suggest how this can be implemented?
I've tried this example http://nodejs.ru/385 but it doesn't allow controlling the number of workers, so I decided there must be another solution, but I can't find it.
If you have examples or links that will help me resolve this issue, please write to me.
I guess you are looking for this: http://learnboost.github.com/cluster/
I don't think cluster will do it for you.
What you want is to use one process per request.
Bear in mind that this can be very inefficient, and Node is designed to avoid exactly this kind of per-request worker processing, but if you really must do it, then you must do it.
On the other hand, Node is very good at handling processes, so you will want to keep a process pool, which is easily accomplished with Node's built-in child_process.spawn API.
Also, you will need a way to communicate with the worker processes.
I suggest opening a Unix domain socket and sending the client connection's file descriptor over it, so you can delegate that connection to the new worker.
You will also need to handle edge cases such as timeouts, etc.
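A rough sketch of that pool idea; instead of a hand-rolled Unix domain socket it uses fork and Node's built-in IPC channel (child.send with the socket as a handle), which does the file-descriptor passing for you. worker.js, the port and the uid/gid values are placeholders, and the parent needs enough privileges to set them:

const { fork } = require('child_process');
const net = require('net');

const pool = [];

function getWorker(uid, gid) {
  // Reuse an idle worker with the right identity, or spawn a new one.
  let worker = pool.find(w => w.idle && w.uid === uid);
  if (!worker) {
    const child = fork('./worker.js', [], { uid, gid });
    worker = { child, uid, idle: true };
    pool.push(worker);
  }
  worker.idle = false;            // mark busy; set back to true when it reports done
  return worker;
}

net.createServer({ pauseOnConnect: true }, socket => {
  const worker = getWorker(1001, 1001);
  worker.child.send('connection', socket);   // worker receives it in process.on('message')
}).listen(8000);

// worker.js would do something like:
// process.on('message', (msg, socket) => { socket.resume(); /* handle the request */ });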
I use this: https://github.com/pgte/fugue