OpenShift: how to access shared folders - Node.js

I am using Node.js to write a file to a shared drive, and it works fine on my local machine. However, after deploying the code below to OpenShift, the file is not created, because OpenShift is not able to access the folder. Below is my code:
// assumes fs has been imported elsewhere: const fs = require("fs");
writeFile() {
  // UNC path to the shared folder (backslashes escaped for a JS string literal)
  const sharedFolderPath = "\\\\server\\folder";
  fs.writeFile(sharedFolderPath, templatePath, (err) => {
    if (err) {
      console.error(err);
    } else {
      console.info("file created successfully");
    }
  });
}
How can I configure the shared folder with credentials in OpenShift so that my code can write the file?

If this is server side, and you are using the OpenShift S2I builder for Node.js, you can only write to directories under /opt/app-root.
If you need data to survive a restart of the container, then you need to use a persistent volume. You can then mount the volume anywhere, as long as it doesn't overlap a directory which has other content you still need to access. Using persistent volumes which are ReadWriteOnce means you will need to switch the deployment strategy from the default of Rolling to Recreate.
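As a rough sketch of what that could look like, assuming a claim named app-data and a DeploymentConfig named my-node-app (both placeholders), with only the parts relevant to the volume and strategy shown:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data                       # placeholder name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: my-node-app                    # placeholder name
spec:
  strategy:
    type: Recreate                     # needed because the claim is ReadWriteOnce
  template:
    spec:
      containers:
        - name: my-node-app
          volumeMounts:
            - name: data
              mountPath: /opt/app-root/data   # any path that doesn't hide content you still need
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: app-data

Roughly the same thing can usually be done from the CLI with oc set volume dc/my-node-app --add --type=persistentVolumeClaim --claim-name=app-data --mount-path=/opt/app-root/data.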

By default, OpenShift runs images with arbitrary user IDs (e.g. 1000010000), so if it works locally but not on OpenShift, it could be that the directory is not writable for that user.
The following is from the OpenShift Guidelines for creating images (emphasis is mine):
Support Arbitrary User IDs
In order to support running containers with volumes mounted in a secure fashion, images should be capable of being run as any arbitrary user ID. When OpenShift mounts volumes for a container, it configures the volume so it can only be written to by a particular user ID, and then runs the image using that same user ID. This ensures the volume is only accessible to the appropriate container, but requires the image be able to run as an arbitrary user ID.
To accomplish this, directories that must be written to by processes in the image should be world-writable. In addition, the processes running in the container must not listen on privileged ports (ports below 1024).
So you might need to add a RUN chmod 777 /server/folder to your Dockerfile to make that directory world-writable.
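As a hedged sketch of what that Dockerfile change might look like (the base image, paths, and start command are assumptions, not taken from the question):

FROM node:18                          # whatever base image your build already uses (assumption)
WORKDIR /opt/app-root/src
COPY . .
RUN npm install
# Make the target directory world-writable so the arbitrary, non-root UID can write to it
RUN mkdir -p /server/folder && chmod 777 /server/folder
# A slightly tighter alternative from the same guidelines: group-writable via the root group
# RUN chgrp -R 0 /server/folder && chmod -R g=u /server/folder
CMD ["node", "server.js"]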

Related

Terraform/RDS: Create DB users after creation inside VPC

Using Terraform and AWS, I've created a Postgres RDS DB within a VPC. During creation, the superuser is created, but it seems like a bad idea to use this user inside my app to run queries.
I'd like to create an additional access-limited DB user during the terraform apply phase after the DB has been created. The only solutions I've found expect the DB to be accessible outside the VPC. For security, it also seems like a bad idea to make the DB accessible outside the VPC.
The first thing that comes to mind is to have Terraform launch an EC2 instance that creates the user as part of the user-data script and then promptly terminates. This seems like a pretty heavy solution, and Terraform would re-create the instance on every terraform apply unless the instance is left running at the end of the script.
Another option is to pass the superuser and limited user credentials to the server that runs migrations and have that server create the user as necessary. This, however, would mean that the server would have access to the superuser and could do some nefarious things if someone got access to it.
Are there other common patterns for solving this? Do people just use the superuser for everything or open the DB to the outside world?

How to manage linux users with Kubernetes?

I want to secure my clusters with SecurityContext.RunAsUser. The question is simple: which user should I use?
I know the user with UID 1001 (for example) inside a container is the user with UID 1001 on the host. So should I create a user with UID 1001 called pod_user on all my hosts and force all pods to use this user? Or should I verify there is no user with this UID on any of my hosts? I didn't find a best-practices guide about this.
I have the same type of question about the Dockerfile: should I declare a user in the Dockerfile with a specific UID and reuse it in SecurityContext.RunAsUser? Some official images run with a specific hardcoded user in the Dockerfile, and others run as nobody. I found this interesting post, but there is no clear answer: Kubernetes: Linux user management.
My ideal recommendation here would be:
Your Dockerfile uses RUN adduser to create a non-root user, and switches USER to that user after doing all of the software installation (see the sketch after this list)
You don't need to explicitly specify a user ID in the Kubernetes configuration
It doesn't matter what that user name or numeric ID is, or if it's the same user ID that exists in another container or on the host, just so long as the numeric uid isn't 0
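A minimal Dockerfile sketch of that pattern (the base image, user name, and start command are placeholders, not something from the question):

FROM node:18                  # placeholder base image
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
# Create an unprivileged user only after the installation steps that need root are done
RUN adduser --system --no-create-home appuser
USER appuser
CMD ["node", "server.js"]

The container then runs as appuser by default, so nothing extra is needed in the pod spec.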
The only place there's a practical problem is if the container tries to store data in a volume. Again, the ideal case would be to avoid this entirely; prefer using a database or cloud storage if that's a possibility.
If you do need to use a local volume, you can configure the pod's security context with a supplemental fsGroup:. That is an arbitrary numeric group ID (not user ID) which will own the files in the volume, and which will also be added to the group list of the main container process.
apiVersion: v1
kind: Pod
spec:
  securityContext:
    fsGroup: 12345 # or any numeric gid
Don't use hostPath volumes at all: you can't guarantee that the same contents will be on every node if the pod gets rescheduled, and if the node fails, content that lives only on that node will be lost.

Multiple Nodejs applications in elastic beanstalk

I have a Node.js project with multiple services: web and workers. All of these services are in the same repo, with the only difference being the script used to invoke them.
I want a different config for each service, but I also want to keep them under one repo. I could use environments, but then it would mess up my real environments like production, staging, etc.
How can I use Elastic Beanstalk for this kind of architecture? Is Compose Environments the best solution?
There are a lot of ways to handle this, each with their pros and cons. What I did in the past was upload my configs to an S3 bucket that was not publicly readable. I would then create a signed URL (good for the next couple of years, or whatever) and set it as an environment variable in the Beanstalk config. Then, in my .ebextensions/01-setup.config (or somewhere similar), I have this:
{
  "container_commands": {
    "copy_config": {
      "command": "curl $CONFIG_URL > conf/config.json"
    }
  }
}
On startup, the container would snag a copy of the config from the S3 bucket, copy it locally, and then the application would start up with this config.
Very simple to maintain.
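For reference, a signed URL like that can be produced with the AWS CLI; the bucket, key, and environment names below are made up, and the maximum allowed lifetime depends on the signature version in use:

# Generate a time-limited URL for the private config object
aws s3 presign s3://my-config-bucket/myapp/config.json --expires-in 604800
# Store the resulting URL as CONFIG_URL on the Beanstalk environment
aws elasticbeanstalk update-environment --environment-name my-env \
  --option-settings Namespace=aws:elasticbeanstalk:application:environment,OptionName=CONFIG_URL,Value="<signed-url>"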

ReadStream With Lock (NodeJS)

var fileStream = fs.createReadStream(filePath)
How can I get a read stream with a shared/exclusive lock, so that the file cannot be deleted or altered?
I don't think Node exposes any filesystem locking mechanisms.
If you were going to use the filesystem for system-wide locks or secure inter-process communication, you'll need to find another way (e.g. sockets).
If it's not security-critical, there are some ways of making it harder (but not impossible) for other processes to mess with your files:
Use unguessable filenames: require('crypto').randomBytes(16).toString('hex')
Narrow permissions when creating files via options on createReadStream.
Run the node process as a special user, so files will be owned only by that user. Either configure the OS to run node under an appropriate user, or have node run as root and switch to another user via process.setuid/setgid.
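A small sketch of how the first two suggestions might be combined (the directory here is hypothetical, and this only makes tampering harder; it is not a real lock):

const crypto = require("crypto");
const fs = require("fs");
const path = require("path");

// Unguessable filename, so other processes cannot easily target the file by name
const fileName = crypto.randomBytes(16).toString("hex") + ".dat";
const filePath = path.join("/var/lib/myapp", fileName); // hypothetical directory

// Create the file with owner-only permissions (0o600) before reading it back
fs.writeFileSync(filePath, "some data", { mode: 0o600 });

const fileStream = fs.createReadStream(filePath);
fileStream.on("data", (chunk) => console.log(chunk.toString()));

Note that the mode option only applies when the file is created, and none of this prevents root from deleting the file.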

Amazon AWS EC2 Dashboard and SSH - Saving Text Files to Volumes

As a new user of SSH and the Amazon AWS EC2 Dashboard, I am trying to test whether I can save data onto a volume from one instance, then access that data from another instance by attaching the volume to it (after terminating the first instance).
When I create the first instance, the AMI is "Amazon Linux AMI 2014.03.2 (HVM)" and the family is "general purpose" with EBS storage only. I automatically assign a public IP address to the instance. I configure the root volume so that it does NOT delete on termination.
As soon as the instance is launched, I open up PuTTY and set the host name to the instance's Public IP Address under Port 22, and authenticate using a private key saved onto the disc that I have already generated earlier.
Upon signing into the instance, I create a text file by typing the following code:
echo "testing">test.txt
I then confirm that the text "testing" is saved to the file "test.txt":
less test.txt
I see the text "testing", thus confirming that it is saved to the file. (I am assuming at this point that it is saved onto the volume, but I am not entirely sure.)
I then proceed to terminate the instance. I launch another one using the same AMI, the same instance type, and a different public IP address. In addition to the root volume, I attempt to add the volume that was used as the root volume for the previous instance. (Oddly enough, the snapshot IDs for the previous volume and the root volume of the new instance are identical.) In addition, I use the same instance tag, the same security group, and the same key pair as for the previous instance.
I open up PuTTY again, this time using the Public IP Address of the new instance, but still using the same private key and port used for the previous instance. Upon logging in, I type:
less test.txt
but I am greeted with this message:
test.txt: No such file or directory
Is there any advice that anyone can offer me regarding this issue? Is it even possible to store a text file on a volume? If so, am I performing this operation incorrectly?
Since the secondary volume has the same UUID as the root volume, and Amazon Linux uses UUID-based identification for the root filesystem, there is a chance that the secondary volume was picked up as the root volume. That mix-up in choosing the root volume may be why the initial attempt to find test.txt failed.
A reboot might have caused the volumes to be picked up in a different order, which would be why you were then able to find the file.
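If you want to read the old volume explicitly rather than rely on boot-time ordering, you can mount it yourself after attaching it; the device name below is an assumption and depends on how the volume was attached:

# List block devices to find the attached secondary volume
lsblk
# Mount its partition somewhere out of the way
sudo mkdir -p /mnt/oldroot
sudo mount /dev/xvdf1 /mnt/oldroot
# The file written on the first instance should then be visible, for example:
less /mnt/oldroot/home/ec2-user/test.txt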
