Security concerns for OTA updates on Debian Linux

I need over-the-air (OTA) updates for a Raspberry Pi board running Debian. I'm thinking of running a cron job that performs an apt-get update against my own private repository, so I can push my updates to the repository and the system will automatically pull them.
My question is about security: is this a safe way of doing OTA updates, or could it potentially allow attackers to push malicious "updates" to my device?

If you only run apt-get update, nothing gets upgraded: apt merely refreshes its package lists from the repositories configured in sources.list.
If you mean apt-get update && apt-get upgrade (which actually updates your system), then I think it does not depend on how you invoke the update, but rather on how secure the server holding the repository is, and of course on the source you are getting your new packages from (the safest way would be to build them from source yourself).
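As a minimal sketch of the cron-based approach from the question (the schedule, the file name and the non-interactive -y flag are my own assumptions, not something the poster specified):

# /etc/cron.d/ota-upgrade -- hypothetical file name
# Refresh package lists and apply upgrades from the configured repos every night at 03:00
0 3 * * * root apt-get update && apt-get -y upgrade

Debian also ships an unattended-upgrades package that automates much of this and is usually more robust than a hand-rolled cron job.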

I ran into the same situation while hosting a Python script. Possible attack vectors:
manipulate your repo on the server
man in the middle attack
direct attack to the client
For 1 and 2, the client should verify the code before running it: a checksum (CRC) could be retrieved from the server for comparison, but unfortunately automation renders this protection largely useless, since an attacker who controls the server can replace the checksum along with the script. HTTPS does not help against 1; only a well-secured server does, and perhaps an obscured directory name such as /2q3r82fnqwrt324w978/23r82fj2q.py
For all three points it would make sense to scan the script for dangerous commands, e.g. sudo, or the dangerous Python functions listed at https://www.kevinlondon.com/2015/07/26/dangerous-python-functions.html
Last but not least, one idea is to diff the new code against the old and only accept minor changes; however, this prevents larger rewrites of the code.
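A stronger variant of the checksum idea under points 1 and 2 is to sign the script offline and have the device verify the signature before executing it. A minimal sketch, assuming a GPG key pair you control and hypothetical file names and URLs:

# On your build machine: create a detached signature with your private key
gpg --armor --detach-sign update_script.py            # produces update_script.py.asc

# On the device: fetch script and signature, verify before running
wget https://example.com/2q3r82fnqwrt324w978/update_script.py
wget https://example.com/2q3r82fnqwrt324w978/update_script.py.asc
gpg --verify update_script.py.asc update_script.py && python3 update_script.py

This only helps if the signing key never touches the server, so a compromised repository cannot re-sign a malicious script.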


Setting up a Linux machine on a web server

I have a server with Hostinger, and I have SSH access.
It lacks a LOT of commands. Here's my bin folder.
https://gyazo.com/4509a9c8868e5a19c01f78ba3e0bf09e
I can use wget, meaning I can grab packages.
How can I get this up and running as an average Linux machine? My plan is to use Heroku on it (sneaky, I know) and run Django and such, but it lacks so much to start with that it's looking really hard. I'm missing essentials, including dpkg, apt, make, etc. Tips are appreciated.
There shouldn't be a case where a Linux-based server is missing core packages like the package manager (as I understand it, you don't have apt-get).
I'm missing essentials, including dpkg, apt, make, etc.
For me, this server is broken and needs to be reinstalled.
I guess you can try to install apt with wget:
Look for an appropriate release here: http://security.ubuntu.com/ubuntu/pool/main/a/apt/
Example: wget http://security.ubuntu.com/ubuntu/pool/main/a/apt/apt_1.7.0_amd64.deb
Install it with: dpkg -i apt_1.7.0_amd64.deb
But maybe you are on a different OS than you think? Have you tried installing with yum or dnf? To check which OS you are running, type:
cat /etc/*release
or
lsb_release -a
Going back to your question on how to set up a Linux server.
1. Update
[package manager] update
If you run a Debian-based OS, use apt-get as the package manager; if CentOS-based, use yum or dnf (dnf is the updated yum); Arch uses pacman. For other distributions, look it up.
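For concreteness, a sketch of that update step on the distributions mentioned above (the -y flags for non-interactive runs are my addition):

apt-get update && apt-get upgrade -y    # Debian/Ubuntu
yum update -y                           # CentOS/RHEL (or: dnf upgrade -y)
pacman -Syu                             # Arch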
2. Install
Install the packages you require. To make life easier you can install groups:
yum groupinstall [name_of_group]
To my knowledge apt doesn't have group install, but uses meta packages instead (they point to a group of packages), e.g.:
apt-get install build-essential
3. Create users
Avoid using root! Create users for services, processes etc. This is tremendously important for security reasons; there is much more to read on overall Linux security.
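A minimal sketch of creating such a service account (the user name and paths are illustrative):

# System account with no home directory and no login shell
useradd --system --no-create-home --shell /usr/sbin/nologin myservice
# Run the service as that user, e.g. via systemd's User= option or directly:
sudo -u myservice /opt/myservice/run.sh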
4. Configure
Maybe a silly point, but configure what needs to be configured, for instance web servers, SSH, workspace etc. Each use case is different.
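As one small example on the SSH side, two standard sshd_config directives (adapt to your own policy and restart sshd afterwards):

# /etc/ssh/sshd_config
PermitRootLogin no           # log in as a normal user, escalate with sudo
PasswordAuthentication no    # require SSH keys instead of passwords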
5. Orchestrate
If you don't want to set up the Linux environment by hand from the shell every time, you can use tools like Chef or Ansible to do it for you (of course you need to configure them first, which takes some time, but you will save much more later, trust me).
For setting up application environments, I really recommend using Docker. Thanks to this, your application will work on any Linux-based server that has the Docker engine installed. Not only maintenance but also deployment becomes child's play: just pull your image onto any server and run a container with the necessary parameters.
In the end you will only need a server with security and kernel updates plus the Docker engine; the rest of the dependencies are resolved inside your Docker image.
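A minimal sketch of that Docker workflow for a Django-style app (image name, port and requirements file are assumptions, and runserver is only suitable for development):

# Dockerfile
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]

docker build -t myapp .
docker run -d -p 8000:8000 myapp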
Hope it helps
Heroku isn't a web server in the same sense as Apache or Nginx. It's a platform as a service provider. You don't install it on your own server, you use its hosted platform (and it uses whatever web server you bundle into your slug).
I suggest you go through the getting started tutorial for Python, which walks you through deploying a simple Django app on Heroku. That should give you a good idea of how Heroku works.
If your goal is to enable some kind of deploy workflow on your own server (via shared hosting or a server where you have full administrative access) you can search the web for deploy tools. There are all kinds of them, some of which may be more suitable to your needs than others.

How can I deal with security updates in my docker containers?

We want to avoid including "yum update" within the Dockerfile, as it could generate a different container depending on when the Docker image is built, but obviously this could pose security problems if the base system needs to be updated. Is the best option really to have an organization-wide base image and update that? The issue there is that it would require rebuilding and redeploying all applications across the entire organization every time a security update is applied.
An alternative that seems a bit out there to me would be to simply ignore security updates within the container and only worry about them on the host machine. The thought process here is that for an attacker to get into a container, there would need to be a vulnerability on the host machine, another vulnerability within docker-engine to get into the container, and then an additional vulnerability to exploit something within the container, which seems like an incredibly unlikely series of events. With the introduction of user namespaces and seccomp profiles, the risk seems even lower.
Anyway, how can I deal with security updates within the containers, with minimal impact to the CI/CD pipeline, or ideally not having to redeploy the entire infrastructure every so often?
You could lessen the unrepeatability of builds by introducing an intermediate update layer.
Create an image like:
FROM centos:latest
RUN yum update -y
Build the image, tag it and push it. Now your builds won't change unless you decide to change them.
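For example (the registry and image name are placeholders):

docker build -t registry.example.com/myimage:latest .
docker push registry.example.com/myimage:latest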
You can either point your other Dockerfiles to myimage:latest to get automatic updates once you decide to rebuild the base, or point them to a specific release.
The way I have setup my CI system is that a successful (manual) build of the base image with updates triggers a build of any images that depend on it.
A security issue gets reported? Check that an updated package is available or do a temporary fix in the Dockerfile. Trigger a build.
In a while you will have fixed versions of all your apps ready to be deployed.
Most major distributions will frequently release a new base image which includes the latest critical updates and security fixes as necessary. This means that you can simply pull the latest base image to get those fixes and rebuild your image.
But also since your containers are using yum, you can leverage yum to control which packages you update. Yum allows you to set a release version so you can pin your updates to a specific OS release.
For example, if you're using RHEL 7.2, you might have a Dockerfile which looks something like:
FROM rhel:7.2
RUN echo "7.2" > /etc/yum/vars/releasever
RUN yum update -y && yum clean all
This ensures that you will stay on RHEL 7.2 and only receive critical package updates, even when you do a full yum update.
For more info about available yum variables or other configuration options, just look at the 'yum.conf' man page.
Also, if you need finer grained control of updates, you can check out the 'yum-plugin-versionlock' package, but that's more than likely overkill for your needs.
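If you do end up needing that finer-grained control, a sketch of the plugin's use (the package name is just an example):

yum install -y yum-plugin-versionlock
yum versionlock add openssl     # pin the currently installed openssl version
yum versionlock list            # show current locks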

How to upgrade to the latest OpenSSH on CentOS

It seems that OpenSSH versions 5.4 through 7.1 are vulnerable to an exploit that can trick the server into leaking the SSH keys that grant access to the service.
What is the best and safest way to upgrade to a patched version of OpenSSH on CentOS? Best being easiest, and safest being not accidentally locking myself out of the remote server.
I do know that replacing the keys after the upgrade is crucial. Should I be using yum for this?
exploit that can trick the server into leaking the SSH keys that grant access to the service.
No. It is a bug in the client (the roaming feature, hence the UseRoaming workaround below): a compromised server might obtain the keys from the client, not the other way round.
The simplest mitigation is to add UseRoaming no to your ssh_config. That said, updating via yum is the standard approach and accomplishes basically the same thing:
sudo yum update
The updates for CentOS should be available by now, so the above command should give you a working update.
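A sketch of the client-side workaround mentioned above (applied system-wide here; you could also put it in ~/.ssh/config):

# Disable the vulnerable roaming feature for all hosts
printf 'Host *\n    UseRoaming no\n' >> /etc/ssh/ssh_config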
You can do the update using yum and afterwards restart the service. Existing connections stay open even when you update and restart the service.

Should I use a user other than root when installing NGiNX?

I heard that it would be better to use a sub-user for installing NGiNX. Is that true? I am thinking of using NGiNX to serve virtual hosts that my clients could use for their websites, and I don't want them to have too much control over NGiNX...
I am using Ubuntu Linux distro.
Thanks in advance for any help and/or tips.
How are you planning to install these applications? Since you say you're using Ubuntu, I assume you'll be installing apps via either the graphical package manager or apt-get or aptitude.
If you're using the graphical program manager, then it should prompt you for your password; this performs a sudo under the hood.
If you're using either apt-get or aptitude or something similar, those programs need to be run as root to install.
In both instances above, the installation scripts for the packages will (should) handle any user-related issues that are necessary for the program you're installing to function properly. For example, when I did an apt-get install jenkins, the installation scripts automatically created a jenkins user for me, and my Jenkins CI server runs as the jenkins user automatically.
Of course, if you're compiling all of these programs by hand, all bets are off and you'll need to figure out how best to do all of this yourself. Of course, if you're compiling these programs by hand to get them installed, I'd have to question why you're using Ubuntu in the first place; one of the best parts to using a Linux distribution with sane package management capabilities is actually USING said package management! (Note: by this statement, I mean anything Debian-based for sure; and I understand that Red Hat's yum provides very similar capabilities, but I haven't used anything RedHat since around 2003.)
You don't want a process to have any more access than it needs. So yes, you should use a user besides root -- one that has the minimal privileges required to read the files it needs. Typically this involves creating a new nginx (or www or similar) user specifically for the task.
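For reference, this is the standard nginx pattern: the master process starts as root (so it can bind port 80) and the worker processes drop to the unprivileged account named by the user directive in nginx.conf (www-data on Ubuntu, often nginx elsewhere):

# /etc/nginx/nginx.conf
user www-data;    # worker processes run as this unprivileged user

# verify: master runs as root, workers as www-data
ps -C nginx -o user,cmd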

Is Mercurial Server a must for using Mercurial?

I am trying to pick version control software for our team, but I don't have much prior experience with it. After searching and googling, Mercurial seems worth a try. However, I am a little confused about some general information about it. Basically, our team has only 5 people and we all connect to a server machine which will be used to store the repositories. The server is a Red Hat Linux system. We will probably use a centralized workflow a lot. Because I like the local-commit idea, I still prefer a DVCS. Now I am trying to install Mercurial. Here are my questions.
1) Does the server holding the repositories always need to have the software "mercurial-server" installed? Or does it depend on the workflow? In other words, is it true that if no centralized workflow is used, the server only needs the "mercurial client" installed?
I am confused about the term "mercurial-server". Or does it just mean that whatever Mercurial is installed on the server is called the "mercurial server", regardless of whether the workflow is centralized or not? In addition, because we all work on that server, does that mean only one copy of Mercurial needs to be installed there? We all have our own user directories such as /home/Cassie, /home/John, ... and /home/Joe.
2) Is SSH a must? Or does it depend on the kind of connection between the users and the server? Since we all work on the server itself, SSH is not required, right?
Thank you very much,
There are two things that can be called a "mercurial server".
One is simply a social convention that "repository X on the shared drive is our common repository". You can safely push and pull to that mounted repository and use it as a common "trunk" for your development.
A second might be particular software that allows mercurial to connect remotely. There are many options for setting this up yourself, as well as options for other remote hosting.
Take a look at the first link for a list of the different connection options. But as a specific answer to #2: No, you don't need to use SSH, but it's often the simplest option if you're in an environment using it anyways.
The term that you probably want to use, rather than "mercurial server", is "remote repository". This term is used to describe the "other repository" (the one you're not executing the command from) for push/pull/clone/incoming/outgoing/others-that-i'm-forgetting commands. The remote repository can be either another repository on the same disk, or something over a network.
Typically you use one shared repository to share the code between developers. While you don't technically need it, it has the advantage that synchronization is easier when there is a single place for the freshest code.
In the simplest case this can be a repository on a simple file share where file locking is possible (NFS or SMB), where each developer has write access. In this scenario there is no need to have mercurial installed on the server, but there are drawbacks:
Every developer must have a mercurial version installed, which can handle the repo version on the share (as an example, when the repo on the share is created with mercurial 1.9, a developer with 1.3 can't access this repo)
Every developer can issue destructive operations on the shared repo, including the deletion of the whole repo.
You can't reliably run hooks on such a repo, since the hooks are executed on the developer machines, and not on the server
I suggest using the http or ssh method. You need to have Mercurial installed on the server for this (I'm not taking the http-static method into account, since you can't push to an http-static path), and you get the following advantages:
the mercurial version on the server does not need to be the same as the clients, since mercurial uses a version-independent wire protocol
you can't perform destructive operations via these protocols (you can only append new revisions to a remote repo, but never remove any of them)
The decision between http and ssh depends on your local network environment. http has the advantage that it passes through many corporate firewalls, but you need to take care of secure authentication when you want to push over http back to the server (or don't want everybody to see the content). On the other hand, ssh has the drawback that you might need to lock down the server so that the clients can't run arbitrary programs there (it depends on how trustworthy your clients are).
I second Rudi's answer that you should use http or ssh access to the main repository (we use http at work).
I want to address your question about "mercurial-server".
The basic Mercurial software does offer three server modes:
Using hg serve; this serves a single repository, and I think it's more used for quick hacks (when the main server is down, and you need to pull some changes from a colleague, for example).
Using hgwebdir.cgi; this is a cgi script that can be used with an HTTP server such as Apache; it can serve multiple repositories.
Using ssh (Secure Shell) access; I don't know much about it, but I believe that it is more difficult to set up than the hgwebdir variant
There is also a separate software package called "mercurial-server". This is provided by a different company; its homepage is http://www.lshift.net/mercurial-server.html. As far as I can tell, this is a management interface for option 3, the mercurial ssh server.
So, no, you don't need to have mercurial-server installed; the mercurial package already provides a server.
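A quick sketch of option 1 and of cloning over plain ssh (host names and paths are hypothetical):

# On the server: serve a single repository over HTTP (quick and unauthenticated)
cd /srv/hg/myrepo && hg serve --port 8000

# On a client: clone over HTTP, or over ssh with no extra server software
hg clone http://server.example.com:8000/ myrepo
hg clone ssh://user@server.example.com//srv/hg/myrepo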
