Many programs you download can be run either in a blocking manner or in the background (usually via start/stop/etc. commands). Good examples are HAProxy and Spring Boot apps built to be Linux services... both can be run in either manner.
In systemd unit files you can use the "forking" type to map to start/stop/etc. commands for managing a program that runs in the background as a daemon. Alternatively, you can just use the "simple" type and call the app itself in a blocking manner.
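For illustration, minimal unit files for both approaches ([Unit]/[Install] sections omitted; all paths and names here are hypothetical):

```ini
# /etc/systemd/system/myapp-simple.service -- hypothetical
[Service]
Type=simple
# systemd tracks this process directly; no PID file needed
ExecStart=/opt/myapp/bin/myapp

# /etc/systemd/system/myapp-forking.service -- hypothetical
[Service]
Type=forking
# the program is expected to daemonize itself; a PIDFile
# helps systemd locate the main process after the fork
PIDFile=/run/myapp.pid
ExecStart=/opt/myapp/bin/myapp start
ExecStop=/opt/myapp/bin/myapp stop
```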
Is there any particular reason to prefer "forking" where it is an option? Having done both on numerous services, "simple" seems lighter on config and more obvious in terms of usage.
This is answered in https://www.freedesktop.org/software/systemd/man/daemon.html, section "SysV Daemons". There are mostly only downsides to choosing the "forking" method, because most software out there does not perform the "15 steps" correctly, or at all; in particular, steps 12 and 14 are seldom implemented correctly.
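Worth noting: the same page recommends "new-style" daemons that stay in the foreground, using Type=notify if systemd needs to know when startup is complete. A minimal sketch (service name and path hypothetical):

```ini
# /etc/systemd/system/myapp.service -- hypothetical
[Service]
Type=notify
# the program itself must send READY=1 via sd_notify(); for simple
# scripted cases the systemd-notify helper can send it instead
ExecStart=/opt/myapp/bin/myapp
```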
I am trying to understand what containers are and what their purpose is.
I am a little bit confused. When I started to read about them I saw that they rely on Linux namespaces (is that true?), a way to isolate the processes within the container from the other processes on the machine, and I got the impression that their main purpose is security.
For instance, let's say that I own a server that runs multiple services, and I don't want a single hacked service to be able to compromise the whole system. So I put each service inside a container that makes the service unable to interfere with the other processes on the machine, such as killing them or playing with their memory, and in that way eliminate the risk.
But later I saw other purposes, like being able to ship the app easily. So what is their main purpose? I also read that if their main purpose is security, they have a problem, because they run directly on the host kernel (again, is that true?), and an exploit like Dirty COW was able to get out of the container and corrupt the machine. So I ended up reading about gVisor, which from what I understood tries to secure containers, and in some cases succeeds. What does gVisor do differently that lets it secure containers? Is gVisor a container itself, or just a runtime environment for containers?
Finally, I always see comparisons between containers and VMs, and I ask why? And when should I use each?
I don't know if anything that I wrote is correct, and I will be glad if you point out my mistakes and answer my questions. Yes, I know that there are a lot of them and I am sorry, but thanks!
The answer below is not guaranteed to be concise. Anyone is welcome to point out my mistakes.
It might be a little bit vague, because many people mix up these concepts nowadays.
1. LXC
When I first got to know these concepts, container still meant LXC, a long-established technique in Linux. IMHO, a container is a complete process that does not simulate a kernel. The difference between a container and a normal process is that the container gets an isolated view of the system via namespaces (with resource limits enforced via cgroups), as if it were in a new operating system. But in fact, containers still share the host kernel (you are right), so people do worry about security, especially when you want to deploy in a public cloud (I don't see people using LXC directly on public clouds yet).
Despite the potential insecurity, the convenience and lightweightness (fast boot, small memory footprint) of containers seem to outweigh the drawbacks in most security-insensitive situations. Tools like Docker and Kubernetes make large-scale deployment and management more efficient.
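You can see namespace isolation in action with a quick sketch (uses the unshare tool from util-linux; needs root):

```sh
# run ps in a fresh PID namespace: only processes created inside
# the namespace are visible, yet the same host kernel runs them
sudo unshare --fork --pid --mount-proc ps aux
```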
2. Virtual Machine & Hardware-assisted virtualization
In contrast to containers, the concept of a Virtual Machine represents another category of isolated execution environment. Considering that most VMs leverage hardware-acceleration techniques like VT-x, I will assume you are talking about hardware-assisted virtualization. A virtual machine usually contains a full kernel inside it.
(Diagram from Doug Chamberlain illustrating the VT-x root/non-root modes and protection rings.)
The Intel VT-x technique provides two modes, root mode (privileged) and non-root mode (unprivileged). Each mode has its own ring 0 through ring 3 (i.e., non-root ring 3, non-root ring 0, root ring 3, root ring 0). The whole virtual machine runs in non-root mode, and the hypervisor (VMM, e.g., KVM) runs in root mode.
In the classic qemu+kvm setup, qemu runs in root ring 3, and kvm runs in root ring 0.
The strong isolation and the existence of a guest kernel make virtual machines more secure and compatible. But, of course, the price is performance and efficiency (slower boot, etc.).
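As a quick aside, you can check whether hardware-assisted virtualization is available on a Linux host:

```sh
# vmx = Intel VT-x, svm = AMD-V; a non-zero count means the CPU supports it
grep -E -c '(vmx|svm)' /proc/cpuinfo
# /dev/kvm exists once the kvm module is loaded and usable
ls -l /dev/kvm
```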
3. Container-based Virtualization
People want the isolation of hardware-assisted virtualization, but don't want to give up the convenience of containers. Hybrid solutions were therefore an intuitive next step.
There are two typical solutions at present: Kata Containers and gVisor.
Kata Containers tries to slim down the whole virtual machine stack to make it more lightweight. However, there is still a Linux kernel inside it and it is still a virtual machine, just a more lightweight one.
gVisor claims to be a secure container runtime, but it still leverages hardware virtualization techniques (or ptrace if you don't want virtualization). There is a component called the Sentry, which runs both in non-root ring 0 and root ring 3. The Sentry does part of the guest kernel's job, but is much smaller than Linux. If the Sentry cannot fulfill a request itself, it proxies the request down to the host kernel.
The reason most people believe gVisor is somewhat more secure is that it achieves "defense in depth": there are more layers of indirection between the untrusted code and the host kernel. This usually helps, but again, it is not a guarantee.
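In practice, gVisor ships as an OCI runtime called runsc. A sketch of using it with Docker, assuming runsc is installed and registered as a runtime in Docker's daemon.json:

```sh
# run a container under gVisor instead of the default runc runtime;
# the kernel version reported comes from gVisor's Sentry, not the host
docker run --rm --runtime=runsc alpine uname -a
```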
I am putting this question on Stack Overflow because I found lots of questions on the topic here already.
Short introduction
Simply put, nohup can be used to run apps in the background, and it keeps them running after the user logs off or the terminal or SSH session is closed.
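For example, a typical invocation (the program name and log path here are placeholders):

```sh
# keep myapp running after logout; redirect output explicitly so
# nohup does not drop a nohup.out file in the current directory
nohup ./myapp > myapp.log 2>&1 &
```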
There are many example questions here on Stack Overflow, like this or that.
My question is simple.
Why opt for nohup when there are options like upstart, systemd, ... which manage the app as a service in a much more convenient way (runlevels, ...)?
Reading the many questions on similar topics, the only option offered seems to be nohup. The answer is almost never something like: "... use an upstart script, so it is all handled for you ..."
I would mainly go with something like upstart, except maybe for a quick-and-dirty test scenario.
Am I missing something important?
nohup is quick, easy, doesn't require root access, and doesn't make permanent changes to the system. That's why many people use it (or try to use it) instead of configuring services.
Running things in the background without any supervision is usually a bad idea, although there are many legitimate use cases that don't fit the traditional service model. For example:
Background process might be needed only sometimes, after certain user actions.
More than one instance might be needed. For example: one per user or one per session.
Process might not need to be running all the time. It just quits after doing its job.
Some real-world examples (which use something like nohup and would be hard to implement as system services):
git will sometimes run git gc in the background to optimize the repository without blocking the user's work
adb will start its server in the background and keep it running until the user asks it to terminate
Some compilers have an option to keep a process running in the background to reduce startup times of subsequent invocations
Your understanding is correct: the way those questions were asked, the natural answer is nohup. Upstart would be the answer to a different question, such as "How to make sure an application keeps running on Linux".
I'm trying to configure an automated testing system on Linux that reports any inconsistency, such as incorrect file permissions, failed services, etc., on a custom Linux OS. I can write my own script to do that, but I need a general solution that supports a wide variety of situations and systems.
So, I was wondering if I can configure Chef to only report problems and inconsistencies on Linux, but not fix them?
Kind of. We have a feature called "why-run mode" which tries to do a dry-run check of what Chef would probably do if executed. Unfortunately, because Chef code is, at heart, arbitrary executable Ruby code, we can't be 100% certain. That said, I would try it out and see if the output is enough for your use case.
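For reference, why-run mode is just a flag on the client:

```sh
# report what would change without actually converging the node
chef-client --why-run
# short form
chef-client -W
```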
I am writing some web services in Go on a Linux machine, so the Go executable needs to keep running.
Which is the best way to do it?
Should I set up the Go executable as a service on the Linux machine?
Many thanks
The short answer: use the system service manager if you want to keep things super simple. CentOS currently uses Upstart, which is well documented and can handle most Go applications without too many problems. There are some good examples of Upstart + Go here and here.
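As a sketch, a minimal Upstart job for a Go binary might look like this (the job name and path are hypothetical):

```
# /etc/init/mygoapp.conf -- hypothetical
description "my Go web service"

start on runlevel [2345]
stop on runlevel [016]

# restart the process if it dies
respawn

exec /usr/local/bin/mygoapp
```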
The long answer: it's personal preference. Supervisord, Monit, and Circus are good options as well, but bring differing levels of complexity. I personally like supervisord, since it has a fairly clear syntax and a good heap of options.
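For comparison, the equivalent supervisord program block (same hypothetical names):

```
[program:mygoapp]
command=/usr/local/bin/mygoapp
autostart=true
autorestart=true
stderr_logfile=/var/log/mygoapp.err.log
```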
There's also a good run-down here: http://tech.cueup.com/blog/2013/03/08/running-daemons/
For reasons beyond the scope of this post, I want to run external (user-submitted) code, similar to the Computer Language Benchmarks Game. Obviously this needs to be done in a restricted environment. Here are my restriction requirements:
Can only read/write to current working directory (will be large tempdir)
No external access (internet, etc)
Anything else I probably don't care about (e.g., processor/memory usage, etc).
I myself have several restrictions. A solution which uses standard *nix functionality (specifically RHEL 5.x) would be preferred, as then I could use our cluster for the backend. It is also difficult to get software installed there, so something in the base distribution would be optimal.
Now, the questions:
Can this even be done with externally compiled binaries? It seems like it could be possible, but also like it could just be hopeless.
What if we force the code itself to be submitted and compile it ourselves? Does that make the problem easier or harder?
Should I just give up on home directory protection and use a VM with rollback? What about blocking external communication (isn't the VM usually talked to over a bridged LAN connection)?
Something I missed?
Possibly useful ideas:
rssh. Doesn't help with compiled code, though.
Using a VM with rollback after the code finishes (can the network be configured so there is a local bridge but no WAN bridge?). Doesn't work on the cluster.
I would examine and evaluate both a VM and a special SELinux context.
I don't think you'll be able to do what you need with simple file system protection, because you won't be able to prevent access to syscalls that allow access to the network, etc. You can probably use AppArmor to do what you need, though. It is enforced by the kernel (as a Linux Security Module) and confines what the foreign binary can do.
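One caveat: AppArmor is not in the RHEL base distribution (RHEL ships SELinux instead), so this may clash with your constraints. Still, as a sketch, a profile along these lines could express your two requirements (all paths hypothetical):

```
# /etc/apparmor.d/sandbox.runner -- hypothetical profile
#include <tunables/global>

profile sandbox-runner /srv/sandbox/bin/* {
  #include <abstractions/base>

  # no network access at all
  deny network,

  # read/write only inside the working tempdir
  /srv/sandbox/work/** rw,
}
```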