How to upgrade to the latest OpenSSH on CentOS

It seems that OpenSSH versions 5.4 through 7.1 are vulnerable to an exploit that can trick the server into leaking the SSH keys that grant access to the service.
What is the best and safest way to upgrade to a patched version of OpenSSH on CentOS? "Best" meaning easiest, and "safest" meaning not accidentally locking myself out of the remote server.
I do know that replacing the keys after the upgrade is crucial. Should I be using yum for this?

exploit that can trick the server into leaking the SSH keys that grant access to the service.
No. It is a bug in the client: a compromised server might obtain the keys from the client, not the other way round.
The simplest workaround is to add UseRoaming no to ssh_config, although updating with yum is the standard approach and achieves basically the same result:
sudo yum update
The updates for CentOS should be available by now, so the above command should give you a working update.
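If you only want the quick client-side workaround mentioned above, a minimal sketch (it appends to the system-wide client config; back the file up first) is:
echo 'UseRoaming no' | sudo tee -a /etc/ssh/ssh_config
This only mitigates the roaming bug; the yum update is still the proper fix.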

You can do the update using yum and afterwards restart the service. Existing connections stay open even while you update and restart the service.
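For example, a sketch of that flow on CentOS (the restart command differs between CentOS 6 and 7):
sudo yum update openssh openssh-clients openssh-server
sudo service sshd restart      # CentOS 6
sudo systemctl restart sshd    # CentOS 7
Your current SSH session survives the restart, since sshd forks a separate process per connection.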


Setting up a Linux machine on a webserver

I have a server with Hostinger, and I have SSH access.
It lacks a LOT of commands. Here's my bin folder:
https://gyazo.com/4509a9c8868e5a19c01f78ba3e0bf09e
I can use wget, meaning I can grab packages.
How can I get this up and running as an average Linux machine? My plan is to use Heroku on it (sneaky, I know) and run Django and such, but it lacks so much to start with that it's looking really hard. I'm missing essentials, including dpkg, apt, make, etc. Tips are appreciated.
Your Linux-based server shouldn't be missing core packages like a package manager (as I understand it, you don't have apt-get).
I'm missing essentials, including dpkg, apt, make, etc.
For me, this server is broken and needs to be reinstalled.
I guess you can try to install apt with wget:
Look for the appropriate release here: http://security.ubuntu.com/ubuntu/pool/main/a/apt/
Example: wget http://security.ubuntu.com/ubuntu/pool/main/a/apt/apt_1.7.0_amd64.deb
Then install it: dpkg -i apt_1.7.0_amd64.deb
But maybe you are on a different OS than you think? Have you tried installing with yum or dnf? To find out which OS you are running, type:
cat /etc/*release
or
lsb_release -a
Going back to your question on how to set up a Linux server:
1. Update
[package manager] update
If you run a Debian-based OS, use apt-get as the package manager; if CentOS-based, use yum or dnf (dnf is the updated yum); Arch uses pacman. For other distributions, look it up.
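For reference, the usual invocations are:
sudo apt-get update && sudo apt-get upgrade   # Debian/Ubuntu
sudo yum update                               # CentOS
sudo dnf upgrade                              # Fedora / newer CentOS
sudo pacman -Syu                              # Arch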
2. Install
Install the packages you require. To make life easier you can install groups:
yum groupinstall [name_of_group]
To my knowledge, apt doesn't have group install but uses meta packages instead (they point to a group of packages). Ex.:
apt-get install build-essential
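For example, on CentOS the standard build toolchain group can be installed with:
sudo yum groupinstall "Development Tools"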
3. Create users
Avoid using root! Create dedicated users for services, processes, etc. This is tremendously important for security reasons; look up general Linux security hardening guides for more.
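As a sketch (the user name appsvc is hypothetical, and the nologin path varies by distro, e.g. /sbin/nologin on CentOS):
sudo useradd --system --no-create-home --shell /usr/sbin/nologin appsvc
This gives the service an account that owns its files but cannot be logged into.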
4. Configure
Maybe a silly point, but configure what needs to be configured. For instance, web servers, SSH, workspace, etc. Each use case is different.
5. Orchestrate
If you don't want to set up the Linux environment by hand from the shell each time, you can use tools like Chef or Ansible to do it for you. (Of course you need to configure them first, which will take some time, but later you will save much more, trust me.)
For setting up application environments, I really recommend using Docker. Thanks to this, your application will work on any Linux-based server that has the Docker engine installed. Not only maintenance but also deployment becomes child's play: just download your image on any server, then run a container with the necessary parameters.
In the end you will need any server with only security updates, kernel updates, and the Docker engine. The rest of the dependencies will be resolved inside your Docker image.
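For instance, once you have built an image (the image name, container name, and port here are hypothetical):
docker build -t myapp .
docker run -d --name myapp -p 8000:8000 myapp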
Hope it helps
Heroku isn't a web server in the same sense as Apache or Nginx. It's a platform as a service provider. You don't install it on your own server, you use its hosted platform (and it uses whatever web server you bundle into your slug).
I suggest you go through the getting started tutorial for Python, which walks you through deploying a simple Django app on Heroku. That should give you a good idea of how Heroku works.
If your goal is to enable some kind of deploy workflow on your own server (via shared hosting or a server where you have full administrative access) you can search the web for deploy tools. There are all kinds of them, some of which may be more suitable to your needs than others.

Kurento Media Server automatically creates an extra account "kurento" on installation

On installing kurento-media-server-6.0, it automatically created an extra account "kurento", and the password is unknown. Although it does not have sudo access, an unwanted user account is still a security concern.
On deleting the user account, kurento-media-server does not function properly and has to be reinstalled. What is the significance of this account, and why is it being created?
OS: Ubuntu 16.04.3 LTS 64-bit
That is how almost all "service" applications work on Ubuntu, and is actually a security feature. Installation of an application creates a user that is used only by that application and can have its privileges limited to only what that application needs.
For example, Apache uses www-data, Nginx uses nginx (or www-data if you install from certain sources), PostgreSQL uses postgres, MySQL uses mysql, Postfix mail server uses postfix, etc.
There's no reason for this to be a security concern. The password isn't "unknown" as you say; the account actually has an invalid password, meaning that it is impossible to log in to it unless you give it SSH keys or use sudo -u (which only administrators can do anyway).
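You can verify this yourself; for example, on Ubuntu:
sudo passwd -S kurento   # "L" in the output means the password is locked
A "!" or "*" in the account's /etc/shadow password field likewise means password login is disabled.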
Just leave the account the way it is.

Security concern for an OTA in Debian

I need to have over-the-air (OTA) updates for a Raspberry Pi board running Debian. I'm thinking of running a cron job that does an apt-get update, and having my own private repository, so I can push my updates to the repository and the system will automatically pull them.
My question is with regard to security. Is this a safe way of doing OTA, or could this potentially allow hackers to push malicious "updates" to my device?
If you do an apt-get update, just your package lists get refreshed (from the sources configured in sources.list).
In case you mean apt-get update && apt-get upgrade (which actually updates your system), I think it does not depend on how you invoke your update but rather on how secure the server holding the repository is, and of course on the source you are getting your new packages from (the safest way would be to build them yourself from source).
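As a sketch, the cron-driven flow the question describes could look like this (schedule and file name are illustrative):
# /etc/cron.d/ota-update -- pull and apply updates nightly at 03:00
0 3 * * * root apt-get update && apt-get -y upgrade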
I ran into the same situation, hosting a Python script. The possible attack vectors:
1. manipulation of your repo on the server
2. a man-in-the-middle attack
3. a direct attack on the client
For 1 and 2, we should analyse the code before starting it: a CRC could be retrieved from the server for verification, although automation would unfortunately render this protection unusable. HTTPS does not help against 1; only a secure server does, and maybe an obfuscated directory name, e.g. /2q3r82fnqwrt324w978/23r82fj2q.py
For all three points it would make sense to check the script for dangerous commands, e.g. sudo, or the functions listed at https://www.kevinlondon.com/2015/07/26/dangerous-python-functions.html
Finally, and importantly, one idea is to compare the new code to the old and only accept minor changes. However, this prevents rewriting the code.
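As an illustration of the verification idea from point 1 (the URL and hash are hypothetical, and a cryptographic hash such as SHA-256 is preferable to a plain CRC):
wget -q https://repo.example.com/2q3r82fnqwrt324w978/23r82fj2q.py -O /tmp/update.py
echo 'EXPECTED_SHA256  /tmp/update.py' | sha256sum -c - || exit 1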

How to do server updates on Google Compute Engine?

I want to be able to keep up on server updates/patches on my Google Compute Engine instance.
In comparison, when logging into an Amazon EC2 server over a terminal, they tell you there are updates available and you simply do # yum install updates. Done!
Upon login to Google's Compute Engine (GCE), there is no indication. When doing a # yum install updates, it goes out to check and always comes back with no updates.
From what I can gather, it may be necessary to check more or better repositories -- ???
Here's what I get when doing a yum install updates on the CentOS GCE now (default):
yum install updates
Loaded plugins: downloadonly, fastestmirror, security
Loading mirror speeds from cached hostfile
base: mirror.anl.gov
epel: mirrors.tummy.com
extras: centos.chi.host-engine.com
updates: mirror.thelinuxfix.com
Setting up Install Process
No package updates available.
Error: Nothing to do
What am I not understanding here?
What is the best practice to be sure that the updates/patches are kept up on?
Thanks in advance to whoever populates the answer(s).
The short answer is: run yum update as root.
The longer answer: for automatic updates or notifications, it looks like the current guidance points towards yum-updatesd. This is a package which can send email and/or write to logs if updates are needed. It can also optionally download or apply the updates.
There is also a package named yum-cron which will download and apply updates and email the root user with the details of what was performed. A web search on either of these package names will provide you more information about their use.
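For example, on a CentOS image you might set up yum-cron like this (the service commands differ between CentOS 6 and 7):
sudo yum install yum-cron
sudo chkconfig yum-cron on && sudo service yum-cron start       # CentOS 6
sudo systemctl enable yum-cron && sudo systemctl start yum-cron # CentOS 7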
Just to clarify some confusion that it appears you are having: when you run yum install updates, you are asking yum to install a package that is literally named "updates".
The error message yum shows when you try to install a package literally named "updates" unfortunately can be easily parsed as "there are no updates available" instead of the intended "there is no package named 'updates' available." It might be worth making a feature request or sending a patch to ask the yum maintainers to clarify that error message.
This is an old question, but I thought I'd still answer here in case it helps someone.
GCE CentOS images should already come preconfigured with automatic upgrades enabled. From the GCE documentation:
Automatic updates
Compute Engine does not automatically update the operating system or the software on your instances. However, the CentOS package manager is preconfigured by the operating system vendor to automatically apply security patches and system upgrades on your CentOS instance.
These automatic updates from the operating system vendor do not upgrade instances between major versions of the operating system. The updates apply system upgrades only for minor versions. CentOS instances can automatically update their installed packages in addition to the security patches and system upgrades.
Also, in the case of RHEL/Debian, while GCE doesn't automatically update outdated packages, the OS itself has a feature to auto-upgrade and install critical updates. For example, in Debian that would be via the unattended-upgrades tool, which should already be enabled.
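On such a Debian/Ubuntu instance you can confirm this with, for example:
systemctl status unattended-upgrades
cat /etc/apt/apt.conf.d/20auto-upgrades   # "1" values mean periodic update/upgrade are on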

Should I use a user other than root when installing NGiNX

I heard that it would be better to use a sub-user for installing NGiNX. Is that true? I am thinking of using NGiNX to set up virtual hosts that my clients could use for their websites, and I don't want them to have too much control over NGiNX...
I am using the Ubuntu Linux distro.
Thanks in advance for any help and/or tips.
How are you planning to install these applications? Since you say you're using Ubuntu, then I would assume that you'll be installing apps via either the graphical manager or by apt-get or aptitude.
If you're using the graphical program manager, then it should prompt you for your password; this performs a sudo under the hood.
If you're using either apt-get or aptitude or something similar, those programs need to be run as root to install.
In both instances above, the installation scripts for the packages will (should) handle any user-related issues that are necessary for the program you're installing to function properly. For example, when I did an apt-get install jenkins, the installation scripts automatically created a jenkins user for me, and my Jenkins CI server runs as the jenkins user automatically.
Of course, if you're compiling all of these programs by hand, all bets are off and you'll need to figure out how best to do all of this yourself. And if you're compiling these programs by hand to get them installed, I'd have to question why you're using Ubuntu in the first place; one of the best parts of using a Linux distribution with sane package management capabilities is actually USING said package management! (Note: by this statement, I mean anything Debian-based for sure; I understand that Red Hat's yum provides very similar capabilities, but I haven't used anything Red Hat since around 2003.)
You don't want a process to have any more access than it needs. So yes, you should use a user besides root -- one that has the minimal privileges required to read the files it needs. Typically this involves creating a new nginx (or www or similar) user specifically for the task.
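You can check which unprivileged user a packaged nginx runs as; on Ubuntu, for example:
grep -E '^\s*user' /etc/nginx/nginx.conf   # typically "user www-data;"
Note that the master process still starts as root (to bind port 80), then spawns its workers as that user.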
