In Rackspace CloudFiles, should I create a container for each user's uploads? - rackspace-cloud

I just wanted to know which would be better: creating a separate container for each user, or creating only one container and uploading everything related to users into it?
In the application, I am going to need to list every file a user has, which is why having separate containers seems like a good idea; on the other hand, having thousands of different containers could be bad. I'm not sure.
So, what do you think?
Thanks in advance.

You can do it either way. It would of course take a while to poll that many containers, so I would recommend having just one or two containers. Query speed will be better that way.
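One common way to keep per-user listings cheap with a single container is to prefix object names with the user's id and list by prefix. Below is a minimal sketch using python-swiftclient (Cloud Files speaks the OpenStack Swift API); the storage URL, token, container name, and user id are all placeholders:

    import swiftclient.client

    # Assumes you already have a storage URL and auth token for your
    # Cloud Files account (placeholders below).
    conn = swiftclient.client.Connection(
        preauthurl="https://storage101.example.com/v1/MossoCloudFS_xxxx",
        preauthtoken="YOUR_AUTH_TOKEN",
    )

    # Objects are named "<user_id>/<filename>", so listing one user's files
    # is a single prefix-filtered listing against the shared container.
    headers, objects = conn.get_container("uploads", prefix="user-123/")
    for obj in objects:
        print(obj["name"], obj["bytes"])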

Related

Dockerize VueJS app with node/express in single container

This question must have been asked many times before, but I can't find any solution that helps me solve my problem.
I'm running a VueJS application with Express/Node.js as the server, and I know the best way is probably to separate them into 2 containers. But how can I make this work in 1 container, with a multi-stage build or any other way?
Any tips would be appreciated! Thank you!
Run multiple services in a container
A container’s main running process is the ENTRYPOINT and/or CMD at the end of the Dockerfile. It is generally recommended that you separate areas of concern by using one service per container. That service may fork into multiple processes (for example, Apache web server starts multiple worker processes). It’s ok to have multiple processes, but to get the most benefit out of Docker, avoid one container being responsible for multiple aspects of your overall application. You can connect multiple containers using user-defined networks and shared volumes.
If you need to run more than one service within a container, you can accomplish this in a few different ways.
Reference: https://docs.docker.com/config/containers/multi-service_container/
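If you really do want a single container, one approach that still keeps a single main process is a multi-stage build that compiles the Vue front end and then lets Express serve the resulting static files alongside its API routes. A rough sketch, assuming the Vue build outputs to dist/ and the Express entry point is server.js (both names are assumptions about your project):

    # Stage 1: build the Vue front end
    FROM node:lts AS frontend
    WORKDIR /app
    COPY package*.json ./
    RUN npm ci
    COPY . .
    RUN npm run build              # assumed to emit /app/dist

    # Stage 2: run only the Express server, serving the built assets
    FROM node:lts
    WORKDIR /srv
    COPY --from=frontend /app/dist ./dist
    COPY package*.json server.js ./
    RUN npm ci --omit=dev
    EXPOSE 3000
    CMD ["node", "server.js"]      # server.js can serve ./dist via express.static

Express then serves dist/ and handles the API routes itself, so the container still runs one process, in line with the guidance quoted above.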

How to mount a file and access it from application in a container kubernetes

I am looking for the best solution to a problem where, let's say, an application has to access a CSV file (say employee.csv) and perform operations such as getEmployee or updateEmployee.
Which volume type is best suited for this, and why?
Please note that employee.csv will have some pre-loaded data already.
Also, to be precise, we are using azure-cli for handling Kubernetes.
Please Help!!
My first question would be: is your application meant to be scalable (i.e. have multiple instances running at the same time)? If that is the case, then you should choose a volume that can be written by multiple instances at the same time (ReadWriteMany, https://kubernetes.io/docs/concepts/storage/persistent-volumes/). As you are using Azure, the AzureFile volume could fit your case. However, I am concerned that there could be conflicts with multiple writers (and some data may be lost). My advice would be to use a database system instead, so you avoid that kind of situation.
If you only want to have one writer, then you could use pretty much any of them. However, if you use local volumes you could have problems when a pod gets rescheduled on another host (it would not be able to retrieve the data). Given your requirements (a simple CSV file), the reason I would give for choosing one PersistentVolume provider over another is whichever is least painful to set up. In this sense, just like before, if you are using Azure you could simply use an AzureFile volume type, as it should be the most straightforward to configure in that cloud: https://learn.microsoft.com/en-us/azure/aks/azure-files
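As an illustration of the Azure Files route on AKS, here is a minimal sketch of a PersistentVolumeClaim using the built-in azurefile storage class, mounted into a pod that reads the CSV; the names, size, and image are placeholders:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: employee-data
    spec:
      accessModes:
        - ReadWriteMany              # Azure Files allows multiple writers if you need them
      storageClassName: azurefile    # built-in storage class on AKS
      resources:
        requests:
          storage: 1Gi
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: employee-app
    spec:
      containers:
        - name: app
          image: myregistry.azurecr.io/employee-app:latest   # placeholder image
          volumeMounts:
            - name: data
              mountPath: /data       # the app would read /data/employee.csv
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: employee-data

You would copy the pre-loaded employee.csv into the share once (for example with kubectl cp or an init container) before the application starts reading it.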

Setting up ELB on AWS for Node.js

I have a very basic question, but I have been searching the internet for days without finding what I am looking for.
I currently run one instance on AWS.
That instance has my node server and my database on it.
I would like to make use of ELB by separating the one machine that hosts both the server and the database:
One machine that is never terminated, which hosts the database
One machine that runs the basic node server, which as well is never terminated
A policy to deploy (and subsequently terminate) additional EC2 instances that run the server when traffic demands it.
First of all I would like to know if this setup makes sense.
Secondly, I am very confused about how this should work in practice:
Do all deployed instances run using the same volume, or is a snapshot of the volume used?
In general, how do I set up such a system? Again, I searched the web, and all of the tutorials and documentation are so generalized that I cannot seem to figure out exactly what to do in my case.
Any tips? Links? Articles? Videos?
Thank you!
You would have an Auto Scaling group with a minimum size of 1 that is configured to use an AMI built from your Node.js server. The Auto Scaling group would add/remove instances to the ELB as instances are created and deleted.
EBS volumes cannot be attached to more than one instance at a time. If you need a shared disk volume you would need to look into the EFS service.
Yes, you need to move your database onto a separate server that is not a member of the Auto Scaling group.
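As a rough sketch of that setup with boto3 (the AMI id, subnet ids, ELB name, and resource names are all placeholders): bake an AMI from your configured Node.js server, create a launch configuration from it, attach an Auto Scaling group to the load balancer, and add a CPU-based scaling policy.

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Launch configuration based on an AMI baked from the Node.js server
    autoscaling.create_launch_configuration(
        LaunchConfigurationName="node-server-lc",
        ImageId="ami-0123456789abcdef0",          # placeholder AMI
        InstanceType="t3.micro",
    )

    # Auto Scaling group registered with the (classic) ELB
    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="node-server-asg",
        LaunchConfigurationName="node-server-lc",
        MinSize=1,
        MaxSize=4,
        LoadBalancerNames=["my-node-elb"],                     # placeholder ELB
        VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",   # placeholder subnets
    )

    # Scale out/in to keep average CPU around 60%
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="node-server-asg",
        PolicyName="cpu-target-60",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization"
            },
            "TargetValue": 60.0,
        },
    )

Each instance launched from the AMI gets its own EBS root volume created from the AMI's snapshot, which is why the database has to live outside the group on its own long-lived instance.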

Do we really need security updates on docker images

This question has come to my mind many times and I just wanted everyone to pitch in with their thoughts on same.
The first thing that comes to my mind is that a container is not a VM and is almost equivalent to a process running in isolation on the host instance. Then why do we need to keep updating our Docker images with security updates? If we have taken sufficient steps to secure our host instance, then the Docker containers should be safe. And even looking at it from the multi-layered-security direction: if the Docker host is compromised, then there is no way to stop a hacker from accessing all the containers running on the host, no matter how many security updates you applied to the Docker image.
Are there any particular scenarios anybody can share where security updates for Docker images have really helped?
I understand if somebody wants to update Apache running in the container, but are there reasons to do OS-level security updates for images?
An exploit can be dangerous even if it does not give you access to the underlying operating system. Just being able to do something within the application itself can be a big issue. For example, imagine you could inject scripts into Stack Overflow, impersonate other users, or obtain a copy of the complete user database.
Also, just like any software, Docker (and the underlying OS-provided container mechanism) is not perfect and can have bugs. As such, there may be ways to bypass the isolation and break out of the sandbox.

Container Implementation

I'm currently working on a project where users will upload projects, and other users will be able to clone those projects (think GitHub-esque).
My initial idea is to create a container for each project, making it easy to clone them, though I will still store a reference to each file and its location in the database.
Would creating a container for each project be the best option, or should I stick to a container per user? I know the per-container file limits are huge, but I feel my initial plan would scale better.
Thoughts people?
This is just a personal opinion, as I am currently also using Rackspace Cloud in my project. I think that creating one container for each user would still be a good option, since you can copy and move objects between containers.
Also, by creating a container for each user you can easily get the current object count and size of that user's container, so you can tell how much space they are using without any additional calculation.
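For example, with python-swiftclient (the storage URL, token, and container/object names are placeholders), a user's current usage comes straight from their container's headers, and a project file can be "cloned" with a server-side copy instead of re-uploading it:

    import swiftclient.client

    conn = swiftclient.client.Connection(
        preauthurl="https://storage101.example.com/v1/MossoCloudFS_xxxx",
        preauthtoken="YOUR_AUTH_TOKEN",
    )

    # Per-user usage straight from the container headers
    headers = conn.head_container("user-123")
    print(headers["x-container-object-count"], headers["x-container-bytes-used"])

    # Server-side copy (no re-upload) to clone a project file into another user's container
    conn.put_object(
        "user-456", "project-42/readme.md",
        contents="",
        headers={"X-Copy-From": "/user-123/project-42/readme.md"},
    )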
