Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
If you update, what kinds of problems can happen before you reboot? This situation arises especially often if you use unattended-upgrades to apply security patches.
Shared objects get replaced and so it is possible for programs to get out of sync with each other.
How long can you go safely before rebooting?
Clarification:
What I meant by "can programs get out of sync with one another" is that a long-running binary still has the earlier version of a shared object mapped, while a newly launched instance gets the newer version. It seems to me that if those versions are incompatible, the two processes may not interoperate properly.
And does this happen in practice very often?
More clarification:
What I'm getting at is more along the lines that installers typically start/stop services that depend on a shared library so that they will get the new version of an API. If they get all the dependencies, then you are probably ok. But do people see installers missing dependencies often?
If a service is written to support all previous API versions compatibly, then this will not be an issue. But I suspect that often it is not done.
If there are kernel updates, especially if there are incompatible ABI changes, I don't see how you can get all the dependencies. I was looking for experience with whether and how things "tip over" and whether people have observed this in practice, either for kernel updates or for library/package updates.
Yes, this probably should have been put into ServerFault...
There are two versions of an executable file at any moment in time: the one in memory and the one on disk.
When you update, the one on disk gets replaced; the one in memory is the old one. If it's a shared object, it stays there until every application that uses it quits; if it's the kernel, it stays there until you reboot.
Bluntly put, if it's a security vulnerability you're updating for, the vulnerability stays until you load the (hopefully) patched version. So if it's the kernel, you aren't safe until you reboot. If it's a shared object, restarting every process that has the old copy mapped is enough; a reboot guarantees it.
Basically, I'd say it depends on the category of the vulnerability. If it's security, restart whatever is affected. Otherwise, well, unless the bug is adversely affecting you, I wouldn't worry. If it's the kernel, I always reboot.
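One way to see which processes are still running old library code after an upgrade is to look for deleted files that are still mapped into running processes; a rough sketch, assuming a Linux /proc filesystem:

```shell
# List PIDs of processes that still map a file that was deleted or
# replaced on disk -- after an upgrade, these are the processes still
# executing the OLD copy of a shared object.
grep -l '(deleted)' /proc/[0-9]*/maps 2>/dev/null \
  | cut -d/ -f3 \
  | sort -un
```

Restarting each listed process (or the service that owns it) picks up the new library without a full reboot; tools like checkrestart from debian-goodies automate the same idea.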
Related
I am building an LSM module, which should work with SELinux. It must be registered before SELinux to do the task it needs to do.
Does SELinux fail registration if it is not the first LSM Module to register?
EDIT
I realise this is 5 years too late, and the LSM framework is a moving target; this may not have been correct at the time you posted, and it may become incorrect in the future.
No. SELinux loads last in the module order.
SELinux will stack alongside any non-legacy-major LSM (that is, anything other than SMACK, SELinux, or AppArmor).
Currently Yama is compiled into the kernel by default; it restricts the scope of ptrace operations. Yama is one of the first LSMs to load, yet it currently works well alongside SELinux.
The LSM developers are working on a way to remove the exclusivity of certain LSMs and are close to making AppArmor ready for this change.
The end goal is to allow an unlimited number of LSMs to be compiled into the kernel, although some work remains before this can be achieved.
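On a running kernel you can check which LSMs are active, and in what order they initialized, assuming securityfs is mounted at its usual location:

```shell
# Comma-separated list of active LSMs, in initialization order.
# On a typical system this might show e.g. capability,yama,selinux --
# note the minor LSMs (like Yama) listed before the major one.
cat /sys/kernel/security/lsm
```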
Closed. This question is off-topic. It is not currently accepting answers.
Closed 9 years ago.
I have a small multi-core server that performs a variety of tasks, most of which are multi-threaded and have been speed-tuned satisfactorily. However, some of the tasks rely on existing single-threaded applications that occasionally block performance of the time-sensitive batch processes (as a concrete example, an occasional dump of the database system that streams through bzip2, a single-threaded process, will lock certain database records throughout the dump, which may take 7-10 hours, interfering with other database operations). Obviously, there's no way to natively run a single-threaded process across multiple CPUs other than to replace it with a multi-threaded fork of the original project. There are several multi-threaded alternatives to bzip2. However, there are a host of other problematic single-threaded applications, and I'd prefer to reduce the number of applications on the server that require maintenance and testing.
To that end, I'm looking for a generic solution to run existing single-threaded applications on existing hardware (i.e. an abstraction program that would essentially subdivide and reassemble the instruction stream across multiple processors). I've thought about virtualization solutions, but have little experience with such tools and cannot seem to find features that would satisfy the aforementioned use case. Note the existing hardware is 64-bit, capable of virtualization, and running non-BSD Linux.
Many thanks!
You cannot make a single threaded application multithreaded. It doesn't work that way.
What you can do is cluster single-threaded applications, i.e. run multiple copies of them simultaneously.
An example of this can be seen with node.js, a single-threaded, event-driven JavaScript environment. There are tools such as cluster (http://learnboost.github.io/cluster/) which will manage several node instances and balance the work across them.
By running multiple copies you will have a separate process for each instance which will then run on different cores.
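Even without touching the applications themselves, you can often get this effect at the shell level by running one single-threaded worker per input chunk; a sketch using xargs (the directory layout is a hypothetical example, and gzip stands in for any single-threaded compressor such as bzip2):

```shell
# Compress each dump file with its own single-threaded gzip process,
# running up to $(nproc) of them in parallel -- one per core.
ls /data/dumps/*.dump | xargs -P "$(nproc)" -n 1 gzip -9
```

Each worker is still single-threaded; the parallelism comes entirely from running independent copies side by side, exactly as described above. GNU parallel offers more control over the same pattern.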
Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
Embedded system, no swap, kernel v2.6.36, memory compaction enabled.
Under heavy usage, all the RAM is tied up in cache. Cache was using about 70M of memory. When a user space process allocates memory, no problem, cache gives it up.
But there's a 3rd party device driver that seems to try to allocate a physically contiguous order-5 block (32 contiguous pages, i.e. 128 KiB with 4 KiB pages), and it fails with OOM. A quick look at buddyinfo confirms this: no order-5 blocks available. But as soon as I drop caches, plenty become available and the device driver no longer OOMs.
So it seems to me that virtual memory allocation triggers a cache drop, but physically contiguous allocation does not? This doesn't make sense, because kernel modules are then likely to OOM whenever memory is tied up in cache, and that behavior seems more detrimental than the slow disk access you'd get from not caching.
Is there a tuning parameter to address this?
Thanks!
So here's what's going on. I still don't know why high cache use is causing kernel modules to OOM. The problem is in 3rd party code that we don't have access to, so who knows what they're doing.
One can argue whether this is by design; but if non-critical disk cache can take all available free memory and cause kernel modules to OOM, then IMHO the disk cache should leave something for the kernel.
I've decided instead to limit the cache, so there is always some "truly free" memory left for kernel use, rather than depending on the "sort of free" memory tied up in cache.
There is a kernel patch I found that adds /proc/sys/vm/pagecache_ratio so you can set how much memory the disk cache can take. It was never merged into the mainline kernel for whatever reason (I thought it was a good idea, especially since disk cache can cause kernel OOMs), and I didn't want to carry kernel patches for maintainability and future-proofing reasons. If someone is doing a one-shot deployment and doesn't mind patches, here's the link:
http://lwn.net/Articles/218890/
My solution is that I've recompiled the kernel and enabled cgroups, and I'm using that to limit the memory usage for a group of processes that are responsible for lots of disk access (hence running up the cache). After tweaking the configuration, it seems to be working fine. I'll leave my setup running the stress test over the weekend and see if OOM still happens.
Edit
I guess I found my own answer. There are VM tuning parameters in /proc/sys/vm/. Tune-able settings relevant to this issue are: min_free_kbytes, lowmem_reserve_ratio, and extfrag_threshold.
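A quick way to inspect fragmentation and raise the kernel's reserve of truly free memory (the 16384 value is an arbitrary example, and writing the sysctl requires root):

```shell
# Each column N shows how many free blocks of 2^N contiguous pages
# exist per zone -- an order-5 allocation needs a non-zero count in
# column 5 of the relevant zone.
cat /proc/buddyinfo

# Raise the amount of memory (in kB) the kernel keeps truly free,
# at the expense of page cache; takes effect immediately.
echo 16384 > /proc/sys/vm/min_free_kbytes
```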
Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
There are advantages to daemonizing a process, as it is detached from the terminal. But the same thing can also be achieved with a cron job. [Kindly correct me if not.]
What criteria should I use to decide between a cron job and a daemon process?
In general, if your task needs to run more than a few times per hour (i.e. at intervals shorter than about 10 minutes), you probably want a daemon.
A daemon which is always running, has the following benefits:
It can run more often than once per minute (cron's minimum granularity)
It can remember state from its previous run more easily, which makes programming simpler (if you need to remember state) and can improve efficiency in some cases
On an infrastructure with many hosts, it does not cause a "stampeding herd" effect
Multiple simultaneous invocations are easier to avoid (perhaps?)
BUT
If it quits (e.g. following an error), it won't automatically be restarted unless you implemented that feature
It uses memory even when not doing anything useful
Memory leaks are more of a problem.
In general, robustness favours "cron", and performance favours a daemon. But there is a lot of overlap (where either would be ok) and counter-examples. It depends on your exact scenario.
The difference between a cron job and a daemon is the execution time frame.
A cron job is a process that is executed periodically. An example of a cron job could be a script that removes the contents of a temporary folder every so often, or a program that sends push notifications every day at 9.00 am to a bunch of devices.
A daemon, on the other hand, is a process that runs detached from any user, but it won't be relaunched if it exits.
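The two cron examples above map directly onto crontab entries; a sketch (the script paths are hypothetical):

```
# m h dom mon dow  command
# empty the temporary folder at the top of every hour
0 * * * *  rm -rf /var/myapp/tmp/*
# send push notifications every day at 9:00 am
0 9 * * *  /usr/local/bin/send-push-notifications
```

Note that one minute is cron's finest granularity; anything that must run more often than that is already daemon territory.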
If you need a service that is permanently available to others, then you need to run a daemon. This is a fairly complicated programming task, since the daemon needs to be able to communicate with the world on a permanent basis (e.g. by listening on a socket or TCP port), and it needs to be written to handle each job cleanly without leaking or even locking up resources for a long time.
By contrast, if you have a specific job whose description can be determined well enough in advance, and which can act automatically without further information, and is self-contained, then it may be entirely sufficient to have a cron job that runs the task periodically. This is much simpler to design for, since you only need a program that runs once for a limited time and then quits.
In a nutshell: A daemon is a single process that runs forever. A cron job is a mechanism to start a new, short-lived process periodically.
A daemon can take advantage of its longevity by caching state, deferring disk writes, or engaging in prolonged sessions with a client.
A daemon must also be free of memory leaks, since leaks accumulate over the process's long lifetime and eventually cause problems.
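The contrast can be made concrete with a minimal daemon-style loop (the work and interval are placeholders); unlike a cron-started process, the counter here survives between iterations without touching disk:

```shell
#!/bin/sh
# Minimal daemon skeleton: runs forever and keeps in-memory state
# between iterations -- something a fresh cron-spawned process
# cannot do without persisting state somewhere.
count=0
while :; do
    count=$((count + 1))
    echo "run $count at $(date)"   # placeholder for real work
    sleep 60                       # sub-minute intervals are fine here
done
```

The flip side, as noted above: if this loop dies, nothing restarts it, whereas cron would simply fire the next scheduled run.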
How does IcedTea 6's performance stand up against Sun's own HotSpot on Linux systems? I tried searching Google, but Phoronix's test is the best I could find, and it's almost a year old now. Hopefully things have improved since then.
Also, once Sun completely open-sources the JVM, would it be possible to implement it for Linux platforms such that a main module (like the Quickstarter in the Consumer JRE) starts up with the OS and loads the minimal Java kernel, regardless of any Java apps running, and then progressively loads other modules as necessary? It might improve startup times.
Some benchmarks are available here: http://www.phoronix.com/scan.php?page=article&item=java_vm_performance&num=1 and http://www.phoronix.com/scan.php?page=article&item=os_threeway_2008&num=1
I'd expect Sun's stuff to be faster, but it really depends on all kinds of optimizations; one version might be faster at operation X, but the next version might not be.
EDIT:
Regarding kernel preloading: on Linux you can use preload or alternatives to speed up app loading without affecting overall system performance (whereas loading a Quickstarter equivalent keeps memory occupied at all times). Also, as far as I know, Java loads lots of shared libraries that are shared between apps, so I don't really see the point of building in-kernel support for this. I guess it's easy to make a simple app that loads some libraries and does nothing after that (a quickstarter), but I don't see this making a big difference when loading apps, and in some cases it might even slow down the system (I'm thinking of RAM usage and memory swapping).