We want to avoid including "yum update" within the Dockerfile, as it could generate a different container depending on when the Docker image is built, but obviously this poses some security problems if the base system needs to be updated. Is the best option really to have an organization-wide base image and update that? The issue there is that it would require rebuilding and redeploying every application across the entire organization each time a security update is applied.
An alternative that seems a bit out there to me would be to simply ignore security updates within the container and only worry about them on the host machine. The thought process here is that for an attacker to get into a container, there would need to be a vulnerability on the host machine, another vulnerability within the Docker engine to get into the container, and then an additional vulnerability to exploit something within the container, which seems like an incredibly unlikely series of events. The introduction of user namespaces and seccomp profiles seems to reduce the risk further.
Anyway, how can I deal with security updates within the containers, with minimal impact to the CI/CD pipeline, or ideally not having to redeploy the entire infrastructure every so often?
You could lessen the unrepeatability of builds by introducing an intermediate update layer.
Create an image like:
FROM centos:latest
RUN yum update -y
Build the image, tag it and push it. Now your builds won't change unless you decide to change them.
You can either point your other Dockerfiles to myimage:latest to get automatic updates whenever you decide to rebuild, or point to a specific release.
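For example, here is a hedged sketch of that workflow, assuming the image is named myimage and lives in a private registry at registry.example.com (the name, registry and date tag are all placeholders):

# build the intermediate image from the Dockerfile above, tagging it with a date and as latest
docker build --pull -t registry.example.com/myimage:2016-05-01 -t registry.example.com/myimage:latest .
# push both tags so downstream Dockerfiles can choose a pinned release or latest
docker push registry.example.com/myimage:2016-05-01
docker push registry.example.com/myimage:latest

Downstream Dockerfiles then start with FROM registry.example.com/myimage:latest to pick up updates on their next rebuild, or with the dated tag to stay pinned.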
The way I have set up my CI system is that a successful (manual) build of the base image with updates triggers a build of any images that depend on it.
A security issue gets reported? Check that an updated package is available or do a temporary fix in the Dockerfile. Trigger a build.
In a while you will have fixed versions of all your apps ready to be deployed.
Most major distributions will frequently release a new base image which includes the latest critical updates and security fixes as necessary. This means that you can simply pull the latest base image to get those fixes and rebuild your image.
Also, since your containers use yum, you can leverage yum to control which packages get updated. Yum allows you to set a release version, so you can pin your updates to a specific OS release.
For example, if you're using RHEL 7.2, you might have a Dockerfile which looks something like:
FROM rhel:7.2
RUN echo "7.2" > /etc/yum/vars/releasever
RUN yum update -y && yum clean all
This ensures that you will stay on RHEL 7.2 and only receive critical package updates, even when you do a full yum update.
For more info about available yum variables or other configuration options, just look at the 'yum.conf' man page.
Also, if you need finer grained control of updates, you can check out the 'yum-plugin-versionlock' package, but that's more than likely overkill for your needs.
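As a rough sketch, in case it helps (the package name openssl is just an example):

# install the plugin, then lock a package at its currently installed version
yum install -y yum-plugin-versionlock
yum versionlock add openssl
# inspect the current locks, or remove them all again
yum versionlock list
yum versionlock clear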
Hi guys,
For various projects, I'm creating single Docker environments. Each Docker container consists of Debian, Nginx, Node.js, etc. and is going to be used by developers as well as in production via Google Cloud's Kubernetes. Since the Node.js and module versions should be the same everywhere, I would like to restrict access to certain npm commands (somehow). Developers often work with different Node.js and project module versions, and that caused a lot of trouble in the past. With the Docker containers, I can provide environments with everything you need for a project. To finish this step, I would like to restrict npm command execution and only allow arguments like install, test, etc.
Please drop me a comment if you know how to resolve this :)
Cheers
It is almost impossible to limit which commands your developers can run in the container if they have access to the Dockerfiles and can somehow change the build flow.
But because containers provide isolation, and you can build a custom container for each application based on your base image, it is not a big problem if the version of some package changes for one application (for example, in a build step), because it will not affect the other apps. They just have different containers.
So you will not have the compatibility problems you get when one server hosts many applications that share an environment.
The only thing you need to do is make sure that nobody changes the container image you are using as a base.
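That said, if you only want a guard rail rather than real enforcement, one option is to shadow npm inside the image with a small wrapper script that whitelists the allowed sub-commands. A sketch, assuming the real npm lives at /usr/bin/npm and the wrapper is placed earlier in PATH (both assumptions):

#!/bin/sh
# hypothetical wrapper, e.g. saved as /usr/local/bin/npm so it shadows the real binary
case "$1" in
  install|test|run)
    exec /usr/bin/npm "$@"    # assumed location of the real npm
    ;;
  *)
    echo "npm $1 is not allowed in this container" >&2
    exit 1
    ;;
esac

Anyone with root inside the container can of course bypass this, so it complements rather than replaces controlling the base image.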
I need to have over-the-air (OTA) updates for a Raspberry Pi board running Debian. I'm thinking of running apt-get update as a cron job against my own private repository, so I can push my updates to the repository and the system will automatically pull them.
My question is in regard to security. Is this a safe way of doing OTA updates, or could this potentially allow hackers to push malicious "updates" to my device?
If you do an apt-get update, only the package lists defined in your sources.list get refreshed; nothing is actually upgraded.
In case you mean apt-get update && apt-get upgrade (which actually upgrades your system), I think the risk does not depend on how you invoke the update but rather on how secure the server holding the repository is, and of course on the source you are getting your new packages from (the safest way would be to build them yourself from source).
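Concretely, apt's own GPG verification of the repository metadata is the main defence against a tampered or spoofed repository, so the private repo should be signed and its key pinned on the device. A sketch, with the repository URL, keyring path and schedule as placeholders:

# register the private repository with a pinned signing key
echo "deb [signed-by=/usr/share/keyrings/myrepo.gpg] https://repo.example.com/debian stable main" \
  > /etc/apt/sources.list.d/myrepo.list

# the cron job itself, e.g. dropped into /etc/cron.daily/ota-upgrade
apt-get update -qq && apt-get upgrade -y -qq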
I ran into the same situation, hosting a Python script. The possible attack vectors are:
1. manipulation of your repo on the server
2. a man-in-the-middle attack
3. a direct attack on the client
For 1 and 2, the code should be checked before it is run: a checksum could be retrieved from the server to verify the script, although full automation makes this protection fairly useless, since the checksum comes from the same place as the code. HTTPS does not help against 1; only a secure server does, perhaps combined with an obscured directory name such as /2q3r82fnqwrt324w978/23r82fj2q.py.
For all three points it also makes sense to scan the script for dangerous commands, e.g. sudo, or the functions listed at https://www.kevinlondon.com/2015/07/26/dangerous-python-functions.html
Last but not least, one idea is to compare the new code against the old and only accept minor changes. However, this prevents larger rewrites of the code.
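For points 1 and 2, a stronger alternative to a plain checksum is to sign the script offline and verify the detached signature on the device before running it. A minimal sketch with hypothetical file names:

# import the public key once, e.g. baked into the device image
gpg --import /etc/keys/update-signing-key.asc
# refuse to run the downloaded script unless its detached signature verifies
gpg --verify update_script.py.asc update_script.py && python3 update_script.py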
On my current server I use unattended-upgrades to automatically handle security updates.
But I'm wondering what people would suggest for working inside Docker containers.
I have several docker containers running for each service of my app.
Should I have unattended-upgrades set up in each? Or maybe upgrade them locally and push the upgraded images up? Any other ideas?
Does anyone have any experience with this in production maybe?
I apply updates automatically, as you did before. I currently have staging containers and nothing in production yet. But there is no harm in applying updates to each container: some redundant network activity, perhaps, if you have multiple containers based on the same image, but it is harmless otherwise.
Rebuilding a container strikes me as unnecessarily time consuming and involves a more complex process.
WRT Time:
The time to rebuild is added to the time needed to update so it is 'extra' time in that sense. And if you have start-up processes for your container, those have to be repeated.
WRT Complexity:
On the one hand you are simply running updates with apt. On the other you are basically acting as an integration server: the more steps, the more to go wrong.
Also, applying the updates in place does not create a 'golden image' problem, since the update step is easily repeatable.
And finally, since the kernel is never actually updated inside a container, you never need to restart the container.
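For what it's worth, applying the updates in place can be as simple as looping over the running containers; a sketch that assumes apt-based images running as root:

# apply pending updates inside every running container
for c in $(docker ps -q); do
  docker exec "$c" sh -c 'apt-get update -qq && apt-get upgrade -y -qq'
done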
I would rebuild the container. Containers are usually built to run one app, and it makes little sense to update the supporting filesystem and all the included but unused/unexposed packages in place.
Having the data in a separate volume lets you have a script that rebuilds the container and restarts it. This has the advantage that launching another container from that image, or pushing it through a repository to another server, would have all the fixes applied.
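A sketch of such a rebuild-and-restart script, with the image, container and volume names as placeholders:

# rebuild so the latest base image and package fixes are picked up
docker build --pull -t myapp:latest .
# replace the running container; the data lives in the named volume, so nothing is lost
docker stop myapp && docker rm myapp
docker run -d --name myapp -v myapp-data:/var/lib/myapp myapp:latest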
I want to be able to keep up on server updates/patches on my Google Compute Engine instance.
In comparison, when logging into an Amazon EC2 server over a terminal, they tell you there are updates available and you simply do # yum install updates. Done!
Upon login to Google's Compute Engine (GCE), there is no indication. When doing a # yum install updates, it goes out to check and always comes back with no updates.
From what I can gather, it may be necessary to check more or better repositories -- ???
Here's what I get when doing a yum install updates on the CentOS GCE now (default):
yum install updates
Loaded plugins: downloadonly, fastestmirror, security
Loading mirror speeds from cached hostfile
base: mirror.anl.gov
epel: mirrors.tummy.com
extras: centos.chi.host-engine.com
updates: mirror.thelinuxfix.com
Setting up Install Process
No package updates available.
Error: Nothing to do
What am I not understanding here?
What is the best practice to be sure that the updates/patches are kept up on?
Thanks in advance to he/she who populates the answer(s).
The short answer is run yum update as root.
The longer answer: for automatic updates or notifications, it looks like the current guidance points towards yum-updatesd. This is a package which can send email and/or write to logs if updates are needed. It can also, optionally, download or apply the updates.
There is also a package named yum-cron which will download and apply updates and email the root user with the details of what was performed. A web search on either of these package names will provide you more information about their use.
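As a sketch, setting up yum-cron on a systemd-based CentOS instance looks roughly like this (the sed edit is just one way to flip the relevant setting in /etc/yum/yum-cron.conf):

yum install -y yum-cron
# switch from "download only" to actually applying the updates
sed -i 's/^apply_updates = no/apply_updates = yes/' /etc/yum/yum-cron.conf
systemctl enable yum-cron && systemctl start yum-cron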
Just to clarify some confusion that it appears you are having, when you run yum install updates you are asking yum to install a package that is literally named "updates".
The error message yum shows when you try to install a package literally named "updates" unfortunately can be easily parsed as "there are no updates available" instead of the intended "there is no package named 'updates' available." It might be worth making a feature request or sending a patch to ask the yum maintainers to clarify that error message.
This is an old question, but I thought I'd still answer here in case it helps someone.
GCE CentOS images should already come preconfigured with automatic upgrades enabled. From the GCE Documentation
Automatic updates
Compute Engine does not automatically update the operating system or the software on your instances. However, the CentOS package manager is preconfigured by the operating system vendor to automatically apply security patches and system upgrades on your CentOS instance.
These automatic updates from the operating system vendor do not upgrade instances between major versions of the operating system. The updates apply system upgrades only for minor versions. CentOS instances can automatically update their installed packages in addition to the security patches and system upgrades.
Also, in the case of RHEL/Debian, while GCE doesn't automatically update outdated packages, the OS itself has a feature to auto-upgrade and install critical updates. For example, in Debian that is handled by the unattended-upgrades tool, which should already be enabled.
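For completeness, on a Debian instance where it is not already active, unattended-upgrades can be enabled with something like:

apt-get install -y unattended-upgrades
# writes /etc/apt/apt.conf.d/20auto-upgrades so the periodic update/upgrade jobs run
dpkg-reconfigure -plow unattended-upgrades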
I'm looking for ways in which to deploy some web services into production in a consistent and timely manner.
I'm currently implementing a deployment pipeline that will end with a manual deployment action of a specific version of the software to a number of virtual machines provisioned by Ansible. The idea is to provision x number of instances using version A whilst already having y number of instances running version B. Then image and flick the traffic over. The same mechanism should allow me to scale new vms in a set using the image I already made.
I have considered the following options but was wondering if theres something I'm overlooking:
TGZ
The CI environment would build a tarball from a project that has passed unit tests and integration tests. Optionally, dependencies would be bundled (removing the need to run npm install on the production machine and relying on network connectivity to a public or private npm repository).
My main issue here is that any dependencies that depend on system libraries would be built on a different machine (albeit the same image). I don't like this.
NPM
The CI environment would publish to a private NPM repository, and the Ansible deployment script would check out a specific version after provisioning. Again, this suffers from a reliance on external services being available when you want to deploy. I don't like this.
Git
Any system dependent modules become globally installed as part of provisioning and all other dependencies are checked into the repository. This gives me the flexibility of being able to do differential deployments whereby just the deltas are pushed and the application daemon can be restarted automatically by the process manager almost instantly. Dependencies are then absolutely locked down.
This would mean there's no need to spin up new VMs except to scale. Deployments can be pushed straight to all active instances.
First and foremost, regardless of the deployment method, you need to make sure you don't drop requests while deploying new code. One simple approach is removing the node from a load balancer prior to switchover. Before doing so, you may also want to try and evaluate if there are pending requests, open connections, or anything else negatively impacted by premature termination. Or perhaps something like the up module.
Most people would not recommend source-controlling your modules. It seems that a .tgz with your node_modules already filled in from an npm install, utilizing a bundledDependencies declaration in your package.json, might cover all your concerns. With this approach, an npm install on your nodes will not download and install everything again. It will, though, rebuild node-gyp implementations, which may cover your system-library concern.
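A rough sketch of that flow, assuming the runtime dependencies are listed under bundledDependencies in package.json and the package name/version are placeholders:

# on the CI machine, after tests pass: install dependencies and produce the tarball
npm install
npm pack                      # creates myapp-1.0.0.tgz with the bundled node_modules inside
# on the target machine: install from the tarball instead of the public registry
npm install ./myapp-1.0.0.tgz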
You can also make use of git tags to more easily keep track of versions with specific dependencies and payloads. Manually deploying the code may get tedious; you may want to consider automating the routine by iterating over x known server entries in a database, driven from an interface. docker.io may also be of interest.