I want to be able to keep up on server updates/patches on my Google Compute Engine instance.
By comparison, when logging into an Amazon EC2 server over a terminal, the login message tells you there are updates available and you simply run # yum install updates. Done!
Upon login to Google Compute Engine (GCE), there is no indication. When I run # yum install updates, it goes out to check and always comes back with no updates.
From what I can gather, it may be necessary to check more or better repositories -- ???
Here's what I get when running yum install updates on the CentOS GCE instance now (default image):
yum install updates
Loaded plugins: downloadonly, fastestmirror, security
Loading mirror speeds from cached hostfile
base: mirror.anl.gov
epel: mirrors.tummy.com
extras: centos.chi.host-engine.com
updates: mirror.thelinuxfix.com
Setting up Install Process
No package updates available.
Error: Nothing to do
What am I not understanding here?
What is the best practice to be sure that the updates/patches are kept up on?
Thanks in advance to whoever populates the answer(s).
The short answer is to run yum update as root.
The longer answer: for automatic updates or notifications, the current guidance points towards yum-updatesd. This is a package which can send email and/or write to logs when updates are needed. It can also, optionally, download or apply the updates.
There is also a package named yum-cron which will download and apply updates and email the root user with the details of what was performed. A web search on either of these package names will provide you more information about their use.
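A minimal sketch of enabling yum-cron on CentOS 6 (which the plugin list in your output suggests you are running; the service commands differ on CentOS 7, and configuration typically lives in /etc/sysconfig/yum-cron):
yum install -y yum-cron
chkconfig yum-cron on
service yum-cron start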
Just to clarify some confusion you appear to be having: when you run yum install updates, you are asking yum to install a package that is literally named "updates".
The error message yum shows when you try to install a package literally named "updates" unfortunately can be easily parsed as "there are no updates available" instead of the intended "there is no package named 'updates' available." It might be worth making a feature request or sending a patch to ask the yum maintainers to clarify that error message.
This is an old question, but I thought I'd still answer here in case it helps someone.
GCE CentOS images should already come preconfigured with automatic upgrades enabled. From the GCE documentation:
Automatic updates
Compute Engine does not automatically update the operating system or the software on your instances. However, the CentOS package manager is preconfigured by the operating system vendor to automatically apply security patches and system upgrades on your CentOS instance.
These automatic updates from the operating system vendor do not upgrade instances between major versions of the operating system. The updates apply system upgrades only for minor versions. CentOS instances can automatically update their installed packages in addition to the security patches and system upgrades.
Also, in the case of RHEL/Debian, while GCE doesn't automatically update outdated packages, the OS itself has a feature to auto-upgrade and install critical updates. For example, in Debian that is handled by the unattended-upgrades tool, which should already be enabled.
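For example, on a Debian-based instance you can check that the mechanism is in place with something like the following (a sketch; the exact file contents vary by image):
dpkg -l unattended-upgrades
cat /etc/apt/apt.conf.d/20auto-upgrades
# typically contains:
# APT::Periodic::Update-Package-Lists "1";
# APT::Periodic::Unattended-Upgrade "1";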
I have a server with Hostinger, and I have SSH access.
It lacks a LOT of commands. Here's my bin folder.
https://gyazo.com/4509a9c8868e5a19c01f78ba3e0bf09e
I can use wget, meaning I can grab packages.
How can I get this up and running as an average Linux machine? My plan is to use Heroku on it (sneaky, I know) and run Django and such, but it lacks so much to start with that it's looking really hard. I'm lacking essentials, including dpkg, apt, make, etc. Tips are appreciated.
There shouldn't be a case where your Linux-based server is missing core packages like a package manager (as I understand it, you don't have apt-get).
I'm lacking essentials, including dpkg, apt, make, etc.
For me, this server is broken and needs to be reinstalled.
I guess you can try to install apt with wget:
Look for an appropriate release here: http://security.ubuntu.com/ubuntu/pool/main/a/apt/
Example: wget http://security.ubuntu.com/ubuntu/pool/main/a/apt/apt_1.7.0_amd64.deb
Install it: dpkg -i apt_1.7.0_amd64.deb
But maybe you are on a different OS than you think? Have you tried to install with yum or dnf? To check which OS you are running, type:
cat /etc/*release
or
lsb_release -a
Going back to your question on how to set up a Linux server:
1. Update
[package manager] update
If you run a Debian-based OS, use apt-get as the package manager; if CentOS-based, use yum or dnf (dnf is the updated yum); Arch uses pacman. For other distributions, look it up.
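For example (run as root or with sudo):
apt-get update && apt-get upgrade   # Debian/Ubuntu
yum update                          # CentOS/RHEL
dnf upgrade                         # Fedora, newer CentOS
pacman -Syu                         # Arch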
2. Install
Install the packages you require. To make life easier you can install groups:
yum groupinstall [name_of_group]
To my knowledge apt doesn't have group install but uses meta packages instead (they point to a group of packages). Example:
apt-get install build-essential
3. Create users
Avoid using root! Create users for services, processes, etc. This is tremendously important for security reasons, and it's worth reading up more on overall Linux security.
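A small sketch (the user name and path are made up; on Debian-based systems the nologin shell lives under /usr/sbin):
useradd --system --no-create-home --shell /sbin/nologin myapp
chown -R myapp:myapp /opt/myapp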
4. Configure
Maybe a silly point, but configure what needs to be configured. For instance web servers, SSH, the workspace, etc. Each use case is different.
5. Orchestrate
If you don't want to set up the Linux environment by hand from the shell each time, you can use tools like Chef or Ansible to do it for you (of course you need to configure them first, which will take some time, but later you will save much more, trust me).
For setting up application environments, I really recommend using Docker. Thanks to this, your application will work on any Linux-based server that has the Docker engine installed. Not only maintenance but also deployment will be child's play: just download your image on any server, then run a container with the necessary parameters.
In the end you will only need a server with security and kernel updates plus the Docker engine; the rest of the dependencies will be resolved inside your Docker image.
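As a rough sketch (image name, tag and port are assumptions), building and running such a container looks like:
docker build -t myapp:1.0 .
docker run -d --name myapp -p 8000:8000 myapp:1.0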
Hope it helps
Heroku isn't a web server in the same sense as Apache or Nginx. It's a platform as a service provider. You don't install it on your own server, you use its hosted platform (and it uses whatever web server you bundle into your slug).
I suggest you go through the getting started tutorial for Python, which walks you through deploying a simple Django app on Heroku. That should give you a good idea of how Heroku works.
If your goal is to enable some kind of deploy workflow on your own server (via shared hosting or a server where you have full administrative access) you can search the web for deploy tools. There are all kinds of them, some of which may be more suitable to your needs than others.
We want to avoid including "yum update" within the Dockerfile, as it could generate a different container depending on when the Docker image is built, but obviously this could pose security problems if the base system needs to be updated. Is the best option really to have an organization-wide base system image and update that? The issue there is that it would require rebuilding and deploying all applications across the entire organization every time a security update is applied.
An alternative that seems a bit out there to me would be to simply ignore security updates within the container and only worry about them on the host machine. The thought process here is that for an attacker to get into a container, there would need to be a vulnerability on the host machine, another vulnerability within docker-engine to get into the container, and then an additional vulnerability to exploit something within the container, which seems like an incredibly unlikely series of events. With the introduction of user namespacing and seccomp profiles, the risk seems to be reduced further.
Anyway, how can I deal with security updates within the containers, with minimal impact to the CI/CD pipeline, or ideally not having to redeploy the entire infrastructure every so often?
You could lessen the unrepeatability of builds by introducing an intermediate update layer.
Create an image like:
FROM centos:latest
RUN yum update -y
Build the image, tag it and push it. Now your builds won't change unless you decide to change them.
You can either point your other Dockerfiles to myimage:latest to get automatic updates whenever you decide to rebuild, or point them to a specific release.
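A sketch of that workflow (the registry host is an assumption; myimage matches the name used above):
docker build -t registry.example.com/myimage:latest .
docker push registry.example.com/myimage:latest
# downstream Dockerfiles then start with:
# FROM registry.example.com/myimage:latest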
The way I have set up my CI system is that a successful (manual) build of the base image with updates triggers a build of any images that depend on it.
A security issue gets reported? Check that an updated package is available or do a temporary fix in the Dockerfile. Trigger a build.
In a while you will have fixed versions of all your apps ready to be deployed.
Most major distributions will frequently release a new base image which includes the latest critical updates and security fixes as necessary. This means that you can simply pull the latest base image to get those fixes and rebuild your image.
But also since your containers are using yum, you can leverage yum to control which packages you update. Yum allows you to set a release version so you can pin your updates to a specific OS release.
For example, if you're using RHEL 7.2, you might have a Dockerfile which looks something like:
FROM rhel:7.2
RUN echo "7.2" > /etc/yum/vars/releasever
RUN yum update -y && yum clean all
This ensures that you will stay on RHEL 7.2 and only receive critical package updates, even when you do a full yum update.
For more info about available yum variables or other configuration options, just look at the 'yum.conf' man page.
Also, if you need finer grained control of updates, you can check out the 'yum-plugin-versionlock' package, but that's more than likely overkill for your needs.
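If you do go that route, a short sketch (the package name is only an example):
yum install -y yum-plugin-versionlock
yum versionlock add openssl
yum versionlock list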
Currently I'm manually distributing and updating two applications across 50 computers running CentOS 6.5 and Ubuntu 14.04. Each time a new version of either application is available, I have to copy all the files and update every computer by hand. It's very time-consuming and frustrating.
To avoid this manual process across 50 computers, I'd like to maintain a central server that contains the latest versions of the applications, so that whenever I need to install or update I can just type a command on the client PC, like we use in CentOS and Ubuntu to install software:
in Ubuntu
sudo apt-get install vlc
and in CentOS
sudo yum install vlc
One of the programs is written in Java and the other in Python.
I googled it and can't find any good, useful source on how to do this.
If someone has already done this or knows how to achieve it, please help.
You need to create packages to make this happen.
Ubuntu uses the Debian package format, so you can use Debian's New Maintainer's Guide, which is the canonical tutorial on how to create a Debian package. It makes the assumption that you're going to upload the package to Debian, which in your case isn't true, but that just means you need to skip some sections of the document.
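Once the debian/ directory described in that guide exists, the build itself is only a few commands (a sketch; the source directory name is an assumption):
sudo apt-get install build-essential devscripts debhelper
cd myapp-1.0
dpkg-buildpackage -us -uc -b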
For RPM, there isn't such a document AFAIK, but there is the book 'Maximum RPM' (which unfortunately is somewhat outdated), and Fedora has augmented that with some guidelines and best practices on their wiki. Since RHEL is created by forking Fedora and stabilizing it, and since CentOS is based on RHEL, what goes for Fedora goes for CentOS too.
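Once you have written a spec file, the usual rpmbuild workflow is roughly the following (a sketch; the file names are assumptions):
sudo yum install rpm-build rpmdevtools
rpmdev-setuptree                          # creates ~/rpmbuild/{SPECS,SOURCES,...}
cp myapp-1.0.tar.gz ~/rpmbuild/SOURCES/
cp myapp.spec ~/rpmbuild/SPECS/
rpmbuild -ba ~/rpmbuild/SPECS/myapp.spec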
These methods will create packages manually, which is always the best way and will result in the least problems afterwards. However, they take time. If you don't want to spend that time, there are also a few options to generate packages which will automate part or all of the job for you. Personally, however, I'm not a fan of these methods and therefore wouldn't recommend them.
Finally, another option is to not create packages, but to use a config management system like puppet to automate the deployment. It's even available in Ubuntu and EPEL.
edit: I notice you may actually be asking about creating a repository instead. That's a different thing. There are several tools to help you do that; at their core, all they do is run createrepo for RPM packages, or dpkg-scanpackages for Debian packages. You can do that yourself, or invest time in a tool like reprepro or aptly or some such.
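For instance, publishing a yum repository and a simple Debian package index can be as little as the following (paths and host names are assumptions):
createrepo /var/www/html/myrepo
# each CentOS client then gets an /etc/yum.repos.d/internal.repo along the lines of:
#   [internal]
#   name=Internal applications
#   baseurl=http://repo.example.com/myrepo
#   gpgcheck=0
# for the Ubuntu clients, run this inside the directory holding the .deb files:
dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz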
I need to have over-the-air (OTA) updates for a Raspberry Pi board running Debian. I'm thinking of running apt-get update via a cron job and having my own private repository, so I can push my updates to the repository and the system will automatically pull them.
My question is in regard with the security. Is this a safe way of doing OTA or could this potentially allow hackers to push malicious "updates" to my device?
If you do an apt-get update, only your package lists (defined in sources.list) get refreshed.
In case you mean apt-get update && apt-get upgrade (which actually updates your system), I think it does not depend on how you invoke the update but rather on how secure the server holding the repository is and, of course, on the source of your new packages (the safest way would be to build them yourself from source).
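One common safeguard is to sign the repository metadata and let apt verify it, so that a compromised mirror or a man-in-the-middle cannot inject packages unnoticed. A sketch only; the key ID, host name and paths are assumptions:
# on the repository server, after regenerating the package indexes:
apt-ftparchive release . > Release
gpg --default-key repo@example.com --clearsign -o InRelease Release
gpg --default-key repo@example.com -abs -o Release.gpg Release
# on each Raspberry Pi, trust the repository's public key once:
wget -qO - https://repo.example.com/repo-key.asc | apt-key add -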
I ran into the same situation, hosting a Python script. The possible attack vectors:
manipulation of your repo on the server
a man-in-the-middle attack
a direct attack on the client
For 1 and 2, we should analyse the code before starting: a CRC could be retrieved from the server to verify the download. Unfortunately, automating that check renders the protection useless, since whoever can replace the script can also replace the CRC. HTTPS does not help against 1; only a secure server and perhaps a ciphered directory name do, e.g. /2q3r82fnqwrt324w978/23r82fj2q.py
For all points it would make sense to check the script for dangerous commands, e.g. sudo, or the functions listed at https://www.kevinlondon.com/2015/07/26/dangerous-python-functions.html
Last but not least, an idea would be to compare the new code to the old and only accept minor changes. However, this prevents rewriting the code.
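One way to make the automated check against vectors 1 and 2 meaningful is to verify a detached GPG signature instead of a CRC fetched from the same place. A sketch; the URL, file names and the presence of a trusted public key on the client are assumptions:
wget -q https://repo.example.com/2q3r82fnqwrt324w978/23r82fj2q.py -O update.py
wget -q https://repo.example.com/2q3r82fnqwrt324w978/23r82fj2q.py.sig -O update.py.sig
gpg --verify update.py.sig update.py || { echo "signature check failed"; exit 1; }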
My company is developing a Linux based software product which is shipped to different customers.
The product itself consists of small software components which interact with each other.
What we usually ship as an update/new release to the customer are the current versions of the different software components, e.g. compA-2.0.1, compB-3.2.3 and compC-4.1.2.
Currently we employ a rather simple shell script for the installation/upgrading process. However, we'd like to move forward to state-of-the-art packaging, mainly to have an easy way of swapping different versions of components, keeping track of files and the packages they belong to, and also to provide the customers with an easier interface for updates/installation.
The software components are installed in different directories, depending on the customers' demands. So it could be /opt, /usr/local or something completely different.
Since the vast majority of our customers runs on rpm-based Linux distributions we decided for rpm-packages instead of dpkg.
In rpm terms our problem is a non-root installation. This is relatively straightforward using the following features:
own rpm database using the --dbpath option
installing in different locations using the Prefix mechanism
optional: disabling automatic library dependencies using AutoReqProv: no in the rpm spec file
Using those features/options allows us to create rpm packages which can be installed with the rpm command-line tool as a non-root user.
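A minimal sketch of such a non-root install (the paths and package file name are assumptions; relocation via --prefix only works because the packages carry a Prefix: tag):
mkdir -p $HOME/rpmdb
rpm --dbpath $HOME/rpmdb --initdb
rpm --dbpath $HOME/rpmdb --prefix /opt/mycompany -ivh compA-2.0.1-1.x86_64.rpm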
However, what we would really like is to install those packages from an HTTP repository with either yum or zypper. The latter is the tool of choice on SUSE-based distributions.
The problem we see is that none of these tools provides the alternative rpm database option (--dbpath in rpm) and the prefix support required for a non-root installation.
Does anybody have a suggestion or idea how to deal with this issue? Is there maybe a third packaging tool we're not aware of?
Or should we go a totally different route? I had a play with GNU Stow and wrote some very simplistic yum-like logic around it, but then I would basically be starting my own package installation tool, which is what I was trying to avoid.