I'm designing a game server service on Kubernetes. I decided that the most suitable volume type for me is hostPath, but in my research I've seen that using hostPath can cause security problems.
I will create a folder for each pod and mount that folder into the pod with hostPath. Users will not have root privileges in the pod; they will only be able to start the Java process as a non-root user inside the container.
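Roughly what I have in mind per pod (a minimal sketch; the names, paths and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gameserver-1                        # one pod per game server (hypothetical name)
spec:
  securityContext:
    runAsNonRoot: true                      # users never get root inside the pod
    runAsUser: 1000
  containers:
    - name: gameserver
      image: example/java-gameserver:latest # placeholder image
      volumeMounts:
        - name: server-data
          mountPath: /data
  volumes:
    - name: server-data
      hostPath:
        path: /srv/gameservers/gameserver-1 # per-pod folder on the node
        type: Directory
```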
Does it really create a security vulnerability for me?
I am building a small system that deploys Azure container groups via REST. In the container groups I have multiple instances that are load balanced via Traefik. For example, I have a container group with two containers plus a Traefik container that redirects requests to the other two containers.
The problem with this solution is getting access to docker.sock in the Traefik container. Without docker.sock Traefik is blind and cannot detect the existing containers.
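In other words, what I'd like is for the Traefik container to point its Docker provider at the socket, e.g. with a Traefik v2-style YAML static configuration like the following (just an illustration of the dependency, not something that currently works on ACI):

```yaml
# Traefik static configuration (sketch): the Docker provider watches
# the local Docker socket to discover backend containers.
providers:
  docker:
    endpoint: "unix:///var/run/docker.sock"
    exposedByDefault: false   # only route to containers that are explicitly labelled
```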
I have tried a couple of approaches, but with no success.
Is it possible to access docker.sock on an Azure Container Instance?
Thanks for your support.
We don't enable access to docker.sock from ACI because the service is built to provide a managed container solution, so access to the underlying host and the Docker daemon running on it isn't available.
Depending on your scenario, we'll be bringing in LB support for Azure VNETs later this year that should be able to help. Hopefully you can find alternative routes but feel free to share details and I'm happy to help if possible.
I'm trying to deploy a simple WordPress example (WordPress & MySQL DB) on Microsoft's new Azure Container Service with Mesos & Marathon as the underlying orchestration platform. I already ran this on the services offered by Google (Kubernetes) and Amazon (ECS) and thought it would be an easy task on ACS as well.
I have my Mesos cluster deployed and everything is up and running. Deploying the MySQL container isn't a problem either, but when I deploy my WordPress container I can't get a connection to my MySQL container. I think this might be because MySQL runs on a different Mesos agent?
What I tried so far:
Using Mesos-DNS to get hold of the MySQL container's host (for now I don't really care which container I get hold of). I set the WORDPRESS_DB_HOST environment variable to mysql.marathon.mesos and specified the host of the MySQL container as suggested here.
I created a new rule for the agent load balancer and a probe for port 3306 in Azure itself. This worked, but it seems like a very complicated way to achieve something so simple. In Kubernetes and ECS, links can simply be defined by using the container name as the hostname.
Another question that came up: what is the difference in Marathon between setting the port in the Port Mappings section and in the Optional Settings section? (See the attached screenshot.)
Update: If I SSH into the master node, I can resolve mysql.marathon.mesos with dig; however, I can't get a connection to work from within another container (in my case the WordPress container).
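For reference, my WordPress app definition looks roughly like this (shown as YAML for readability, Marathon's REST API takes the equivalent JSON; the values are placeholders):

```yaml
id: /wordpress
instances: 1
cpus: 0.5
mem: 512
container:
  type: DOCKER
  docker:
    image: wordpress:latest
    network: BRIDGE
    portMappings:
      - containerPort: 80      # WordPress listens here inside the container
        hostPort: 0            # let Marathon pick a host port
        protocol: tcp
env:
  WORDPRESS_DB_HOST: "mysql.marathon.mesos:3306"   # Mesos-DNS name of the MySQL app
  WORDPRESS_DB_PASSWORD: "changeme"                # placeholder
```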
So there are essentially two questions here: one around stateful services on Marathon, the other around port management. Let me first clarify that neither has anything to do with Azure or ACS in the first place; they are both Marathon-related.
Q1: Stateful services
Depending on your requirements (development/testing or production) you can either use Marathon's persistent volumes feature (simple, but no automatic failover/HA for the data) or, since you are on Azure, a more robust solution like the one I showed here (essentially mounting a file share).
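A rough sketch of the first option (values are placeholders; see the Marathon docs for the exact schema): a local persistent volume is declared in the app's container section and then mapped into the Docker container via a hostPath entry that points at it:

```yaml
container:
  type: DOCKER
  docker:
    image: mysql:5.7                 # placeholder image
  volumes:
    - containerPath: mysqldata       # the persistent volume, lives in the task's sandbox
      mode: RW
      persistent:
        size: 512                    # size in MiB
    - containerPath: /var/lib/mysql  # path inside the container
      hostPath: mysqldata            # maps the persistent volume into the container
      mode: RW
```

Apps with local persistent volumes stay pinned to the agent that holds the data (Marathon also expects a residency/upgradeStrategy section for them; see the docs), which is exactly why there is no automatic failover/HA.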
Q2: Ports
The port mapping you see in the Marathon UI screenshot is only relevant if you launch a Docker image and want to explicitly map container ports to host ports in BRIDGE mode; see the docs for details.
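For example, a BRIDGE-mode mapping in an app definition looks roughly like this (sketch with placeholder values): containerPort is what the app listens on inside the container, hostPort: 0 lets Marathon assign a port from the agent's resources, and servicePort is the stable, cluster-wide port that service discovery and load balancers can use.

```yaml
container:
  type: DOCKER
  docker:
    image: wordpress:latest   # placeholder
    network: BRIDGE
    portMappings:
      - containerPort: 80     # port inside the container
        hostPort: 0           # 0 = dynamically assigned on the agent
        servicePort: 10000    # stable port for service discovery / LB
        protocol: tcp
```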
Say I have a container-as-a-service platform serving containers to different tenants' users, and a single host can be hosting containers of different tenants. How can I allow access over HTTP while making sure that a user can only access his own tenant's containers?
Does Kubernetes solve that? Is there a more lightweight solution?
DCHQ absolutely solves this problem.
The Docker daemon runs as root. Once we add a user to the docker group, they can launch any Docker container, even privileged containers. This seems to be a serious security issue.
Is there a way to prevent some users in the docker group from being able to run privileged containers?
There is currently no way to restrict access to privileged containers. This is why, for example, Fedora has dropped the docker group (because granting access to docker is effectively just like granting passwordless sudo access).
If you want to provide "docker-as-a-service", you probably want to put something like Kubernetes in front of it, which provides only indirect access to the Docker API and can be configured to permit or deny the use of privileged containers.
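As an illustration of that kind of policy in today's Kubernetes (not a Docker-level control, and the namespace name is hypothetical), the Pod Security admission "baseline" level rejects privileged containers when enforced on a namespace:

```yaml
# Any pod submitted to this namespace with privileged: true is rejected.
apiVersion: v1
kind: Namespace
metadata:
  name: tenant-a                                 # hypothetical namespace
  labels:
    pod-security.kubernetes.io/enforce: baseline # baseline forbids privileged containers
```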
Many Docker users also enforce restrictions with SELinux, changing the SELinux domain of containers via the "SecurityContext".
We are in the process of setting up our IT infrastructure on Amazon EC2.
Assume a setup along the lines of:
X production servers
Y staging servers
Log collation and Monitoring Server
Build Server
Obviously we have a need for the various servers to talk to each other. A new build needs to be scp'd over to a staging server. The log collator needs to pull logs from the production servers. We are quickly realizing we are running into trouble managing access keys. Each server has its own key pair and possibly its own security group. We end up copying *.pem files from server to server, kind of making a mockery of security. The build server has the access keys of the staging servers in order to connect via SSH and push a new build. The staging servers similarly have the access keys of the production instances (gulp!).
I did some extensive searching on the net but couldn't really find anyone talking about a sensible way to manage this issue. How are people with a setup similar to ours handling it? We know our current way of working is wrong. The question is: what is the right way?
Appreciate your help!
Thanks
[Update]
Our situation is complicated by the fact that at least the build server needs to be accessible from an external server (specifically, GitHub). We are using Jenkins, and the post-commit hook needs a publicly accessible URL. The bastion approach suggested by #rook fails in this situation.
A very good method of handling access to a collection of EC2 instances is using a Bastion Host.
All machines you use on EC2 should disallow SSH access from the open internet, except for the bastion host. Create a new security group called "Bastion Host" and only allow port 22 incoming from the bastion to all other EC2 instances. All keys used by your EC2 collection are housed on the bastion host. Each user has their own account on the bastion host, and these users should authenticate to the bastion using a password-protected key file. Once they log in, they should have access to whatever keys they need to do their job. When someone is fired, you remove their user account from the bastion. If a user copies keys from the bastion, it won't matter, because they can't log in to the other instances unless they are first logged into the bastion.
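A sketch of that "SSH only from the bastion" rule, expressed here as a CloudFormation security group fragment (resource and parameter names are hypothetical):

```yaml
# Attach AppSecurityGroup to every non-bastion instance: port 22 is only
# reachable from members of the bastion's security group.
AppSecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: SSH only from the bastion host
    VpcId: !Ref Vpc                                   # assumes an existing VPC reference
    SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: 22
        ToPort: 22
        SourceSecurityGroupId: !Ref BastionSecurityGroup
```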
Create two sets of key pairs, one for your staging servers and one for your production servers. You can give your developers the staging keys and keep the production keys private.
I would put the new builds onto S3 and have a Perl script running on the boxes that pulls the latest code from your S3 buckets and installs it on the respective servers. This way, you don't have to manually scp every build over each time. You can also automate this process with some sort of continuous-build tool that builds the code and pushes the artifacts to the appropriate S3 buckets. Hope this helps.