I want to secure my clusters with SecurityContext.RunAsUser. The question is simple: which user should I use?
I know the user with UID 1001 (for example) inside a container is the user with UID 1001 on the host. So should I create a user with UID 1001 called pod_user on all my hosts and force all pods to use this user? Or should I verify that there is no user with this UID on any of my hosts? I didn't find a best-practices guide about this.
I have the same type of question about the Dockerfile: should I declare a user in the Dockerfile with a specific UID and reuse it in SecurityContext.RunAsUser? Some official images run as a specific hardcoded user declared in the Dockerfile, and others run as nobody. I found this interesting post, but there is no clear answer: Kubernetes: Linux user management.
My ideal recommendation here would be:
Your Dockerfile should RUN adduser to create a non-root user, and switch USER to that user after doing all of the software installation
You don't need to explicitly specify a user ID in the Kubernetes configuration
It doesn't matter what that user name or numeric ID is, or whether it's the same user ID that exists in another container or on the host, just so long as the numeric UID isn't 0
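The steps above can be sketched as a Dockerfile; this is a minimal sketch, and the base image, application layout, and the appuser name and UID are arbitrary examples, not prescriptions:

```dockerfile
FROM node:18-slim

# install the application as root first
WORKDIR /app
COPY . .
RUN npm ci --omit=dev

# create a non-root user; the name and UID are arbitrary, only "not 0" matters
RUN adduser --system --uid 10001 appuser

# switch after installation so the runtime process is unprivileged
USER appuser
CMD ["node", "server.js"]
```

With this in place, Kubernetes runs the pod as UID 10001 without any runAsUser setting in the manifest.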
The only place there's a practical problem is if the container tries to store data in a volume. Again, the ideal case would be to avoid this entirely; prefer using a database or cloud storage if that's a possibility.
If you do need to use a local volume, you can specifically configure a security policy with a supplemental fsGroup:. That is an arbitrary numeric group ID (not user ID) which will own the files in the volume, and also will be added to the group list for the main container process.
apiVersion: v1
kind: Pod
spec:
  securityContext:
    fsGroup: 12345 # or any numeric gid
Don't use hostPath volumes at all: you can't guarantee that the same contents will be on every node if the pod gets rescheduled, and if the node fails, content that lives only on that node will be lost.
Related
Using terraform and AWS I've created a Postgres RDS DB within a VPC. During creation, the superuser is created, but it seems like a bad idea to use this user inside my app to run queries.
I'd like to create an additional access-limited DB user during the terraform apply phase after the DB has been created. The only solutions I've found expect the DB to be accessible outside the VPC. For security, it also seems like a bad idea to make the DB accessible outside the VPC.
The first thing that comes to mind is to have Terraform launch an EC2 instance that creates the user as part of its userdata script and then promptly terminates. This seems like a pretty heavy solution, and Terraform would recreate the instance on every terraform apply unless the instance were kept running at the end of the script.
Another option is to pass the superuser and limited user credentials to the server that runs migrations and have that server create the user as necessary. This, however, would mean that the server would have access to the superuser and could do some nefarious things if someone got access to it.
Are there other common patterns for solving this? Do people just use the superuser for everything or open the DB to the outside world?
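One common pattern for this is to manage the limited user with the community PostgreSQL provider for Terraform, run from somewhere with network reach into the VPC (a bastion, VPN, or CI runner inside the VPC). A minimal sketch under those assumptions; the resource names, database name, and variables are placeholders:

```hcl
# sketch, assuming the cyrilgdn/postgresql provider and network access to the RDS endpoint
provider "postgresql" {
  host     = aws_db_instance.mydb.address
  username = var.superuser_name
  password = var.superuser_password
}

# access-limited application user
resource "postgresql_role" "app" {
  name     = "app_user"
  login    = true
  password = var.app_password
}

# grant only the DML privileges the app needs on existing tables
resource "postgresql_grant" "app_tables" {
  role        = postgresql_role.app.name
  database    = "mydb"
  schema      = "public"
  object_type = "table"
  privileges  = ["SELECT", "INSERT", "UPDATE", "DELETE"]
}
```

This keeps the superuser credentials inside Terraform's state and variables rather than on the application server, though the state itself then needs to be protected.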
I am using Node.js to write a file to a shared drive. It works fine on my local machine; however, after deploying the code below to OpenShift the file is not created, because OpenShift cannot access the folder. Below is my code:
writeFile() {
  // backslashes in a JS string literal must be escaped for a UNC path
  const sharedFolderPath = "\\\\server\\folder";
  fs.writeFile(sharedFolderPath, templatePath, (err) => {
    if (err) {
      console.error(err);
    } else {
      console.info("file created successfully");
    }
  });
}
How do I configure a shared folder with credentials in OpenShift so that my code can write the file?
If this is server side, and you are using the OpenShift S2I builder for Node.js, you can only write to directories under /opt/app-root.
If you need data to survive a restart of the container, then you need to use a persistent volume. You can then mount the volume anywhere, so long as it doesn't overlap a directory containing other content you still need to access. Using persistent volumes that are ReadWriteOnce means you will need to switch the deployment strategy from the default of Rolling to Recreate.
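A minimal sketch of such a claim; the name, size, and access mode are assumptions, and you would then mount it into your deployment under a writable path:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce   # single-node read-write; pairs with the Recreate strategy
  resources:
    requests:
      storage: 1Gi
```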
By default, OpenShift runs images with arbitrary user ids, e.g. 1000010000, so if it works locally, but not on OpenShift, it could be that the directory is not writable for that user.
The following is from the OpenShift Guidelines for creating images (emphasis is mine):
Support Arbitrary User IDs
In order to support running containers with volumes mounted in a secure fashion, images should be capable of being run as any arbitrary user ID. When OpenShift mounts volumes for a container, it configures the volume so it can only be written to by a particular user ID, and then runs the image using that same user ID. This ensures the volume is only accessible to the appropriate container, but requires that the image be able to run as an arbitrary user ID.
To accomplish this, directories that must be written to by processes in the image should be world-writable. In addition, the processes running in the container must not listen on privileged ports (ports below 1024).
So you might need to add a RUN chmod 777 /server/folder to your Dockerfile to make that directory world-writable.
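As a less permissive alternative to 777, the same guidelines also work with group-0 ownership, since OpenShift's arbitrary user always belongs to the root group (GID 0). A sketch, reusing the /server/folder path from above:

```dockerfile
# make /server/folder writable by the root group instead of world-writable
RUN mkdir -p /server/folder && \
    chgrp -R 0 /server/folder && \
    chmod -R g=u /server/folder
```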
I'm trying to stay sane while configuring Bacula Server on my virtual CentOS Linux release 7.3.1611 to do a basic local backup job.
I prepared all the configurations I found necessary in the conf-files and prepared the mysql database accordingly.
When I want to start a job (local backup for now) I enter the following commands in bconsole:
*Connecting to Director 127.0.0.1:9101
1000 OK: bacula-dir Version: 5.2.13 (19 February 2013)
Enter a period to cancel a command.
*label
Automatically selected Catalog: MyCatalog
Using Catalog "MyCatalog"
Automatically selected Storage: File
Enter new Volume name: MyVolume
Defined Pools:
1: Default
2: File
3: Scratch
Select the Pool (1-3): 2
This returns
Connecting to Storage daemon File at 127.0.0.1:9101 ...
Failed to connect to Storage daemon.
Do not forget to mount the drive!!!
You have messages.
where the message is:
12-Sep 12:05 bacula-dir JobId 0: Fatal error: authenticate.c:120 Director unable to authenticate with Storage daemon at "127.0.0.1:9101". Possible causes:
Passwords or names not the same or
Maximum Concurrent Jobs exceeded on the SD or
SD networking messed up (restart daemon).
Please see http://www.bacula.org/en/rel-manual/Bacula_Freque_Asked_Questi.html#SECTION00260000000000000000 for help.
I double and triple checked all the conf files for integrity and names and passwords. I don't know where to further look for the error.
I will gladly post any parts of the conf files but don't want to blow up this question right away if it might not be necessary. Thank you for any hints.
It might help someone who makes the same mistake I did:
After looking through manual page after manual page, I found it was my own mistake. For a reason I don't precisely recall (I guess to troubleshoot another issue earlier), I had set all ports to 9101: for the director, the file daemon, and the storage daemon.
So I assume the Bacula components must have blocked each other's communication on port 9101. After restoring the default ports (9102 for the file daemon, 9103 for the storage daemon) according to the manual, it worked and I can now back up locally.
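For reference, the default port layout, sketched with the directive names from the standard sample configs (other directives elided):

# bacula-dir.conf
Director {
  ...
  DIRport = 9101   # console/director connections
}

# bacula-fd.conf
FileDaemon {
  ...
  FDport = 9102    # file daemon listens here
}

# bacula-sd.conf
Storage {
  ...
  SDport = 9103    # storage daemon listens here
}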
You have to add the director's name from the backup server: edit /etc/bacula/bacula-fd.conf on the remote client; see "List Directors who are permitted to contact this File daemon":
Director {
  Name = BackupServerName-dir
  Password = "use *-dir password from the same file"
}
Trying to give production server access to more ops people on our team.
The only issue is the DB access concern: for most tasks, ops do not need DB access, and only a limited set of people should have it.
Let's say we have two servers:
Application Server:
tomcat (app needs access to DB server)
DB server:
Database
So ultimately we would like to give root access to the "application server" so that ops can do all sorts of maintenance on the server, but not be able to gain access to the DB server. This means I cannot, for example, just store the DB password in a configuration file for the app to read.
Are there well known practices that would solve issue like that?
First, any credential that the 'Application Server' has to access the 'DB Server' should be considered handed over to anyone with root on the Application Server. Since you say that DB access must be limited, you cannot give ops complete root on the Application Server.
But do not lose hope, there is sudo.
sudo can give users or groups access to root power, but only for limited purposes. Unfortunately, setting up sudo correctly can be tricky, since you have to prevent subshells and wildcards from yielding full root, but it is possible.
There are too many permutations for a general answer beyond sudo without additional information about your use case. A great reference for sudo with exactly this use case in mind is 'Sudo Mastery' by Michael W. Lucas.
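To make the idea concrete, a sketch of a limited sudoers policy (placed in a file under /etc/sudoers.d/ via visudo); the ops group name and the exact commands are assumptions for your use case, and note the absence of wildcards:

# allow the ops group to restart the app and read its logs, nothing more
%ops ALL=(root) NOPASSWD: /usr/bin/systemctl restart tomcat
%ops ALL=(root) NOPASSWD: /usr/bin/journalctl -u tomcat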
As a new user of SSH and the Amazon AWS EC2 Dashboard, I am trying to test to see whether I can, in one instance, save data onto a volume, then access that data from another instance by adding the volume to the instance (after terminating the first instance).
When I create the first instance, the AMI is "Amazon Linux AMI 2014.03.2 (HVM)" and the family is "general purpose" with EBS storage only. I automatically assign a public IP address to the instance. I configure the root volume so that it does NOT delete on termination.
As soon as the instance is launched, I open up PuTTY and set the host name to the instance's Public IP Address under Port 22, and authenticate using a private key saved onto the disc that I have already generated earlier.
Upon signing into the instance, I create a text file by typing the following code:
echo "testing">test.txt
I then confirm that the text "testing" is saved to the file "test.txt":
less test.txt
I see the text "testing", thus confirming that it is saved to the file. (I am assuming at this point that it is saved onto the volume, but I am not entirely sure.)
I then proceed to terminate the instance. I launch another one using the same AMI, the same instance type, and a different public IP address. In addition to the root volume, I attempt to add the volume that was used as the root volume for the previous instance. (Oddly enough, the snapshot IDs for the previous volume and the root volume of the new instance are identical.) In addition, I use the same instance tag, the same security group, and the same key pair as the previous instance.
I open up PuTTY again, this time using the Public IP Address of the new instance, but still using the same private key and port as for the previous instance. Upon logging in, I type:
less test.txt
but I am greeted with this message:
test.txt: No such file or directory
Is there any advice that anyone can offer me regarding this issue? Is it even possible to store a text file on a volume? If so, am I performing this operation incorrectly?
Since the secondary volume has the same UUID and Amazon Linux uses UUID-based identification for the root filesystem, the secondary volume may have been taken as the root volume. That would explain the mix-up in choosing the root volume and why the initial attempt to find test.txt failed.
A reboot might have allowed the volumes to be picked up in a different order, which is why you were then able to find the file.
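One way to check this on the new instance is to compare the filesystem UUIDs and mount the old volume explicitly by device. A sketch; the device names /dev/xvda1 and /dev/xvdf1 are assumptions that depend on how the volume was attached:

$ sudo blkid /dev/xvda1 /dev/xvdf1     # identical UUIDs confirm the conflict
$ sudo mkdir -p /mnt/old-root
$ sudo mount /dev/xvdf1 /mnt/old-root  # mount the old root by device, not UUID
$ ls /mnt/old-root/home/ec2-user       # test.txt should be here, not at ~/test.txt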