Is there some additional configuration needed before I can set thread priorities in a Windows service?
In my service, I have a few threads that each call the CreateProcess() function to launch an external application. I would like to adjust thread (or process) priorities to normal or lower, depending on some other factors.
The problem is that the SetThreadPriority() function fails with error 6 (ERROR_INVALID_HANDLE). I'm passing in the handle obtained from PROCESS_INFORMATION::hThread (after calling CreateProcess(), of course), so I would expect the handle to be valid.
I've also tried setting the priority on the processes using the SetPriorityClass() function, which also fails.
The service is logged on as a local user.
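Roughly, each worker thread follows this pattern (a trimmed-down sketch; the application path is a placeholder and the real code picks the priority based on other factors):

    #include <windows.h>
    #include <stdio.h>

    void LaunchAndLowerPriority(void)
    {
        STARTUPINFOW si = { 0 };
        PROCESS_INFORMATION pi = { 0 };
        wchar_t cmdLine[] = L"C:\\Tools\\external_app.exe";  /* placeholder */

        si.cb = sizeof(si);

        if (!CreateProcessW(NULL, cmdLine, NULL, NULL, FALSE,
                            0, NULL, NULL, &si, &pi))
        {
            printf("CreateProcess failed: %lu\n", GetLastError());
            return;
        }

        /* This is the call that fails with error 6. */
        if (!SetThreadPriority(pi.hThread, THREAD_PRIORITY_BELOW_NORMAL))
            printf("SetThreadPriority failed: %lu\n", GetLastError());

        CloseHandle(pi.hThread);
        CloseHandle(pi.hProcess);
    }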
Maybe you don't have the correct access rights? The MSDN documentation for SetThreadPriority says:
hThread [in]: A handle to the thread whose priority value is to be set. The handle must have the THREAD_SET_INFORMATION or THREAD_SET_LIMITED_INFORMATION access right. For more information, see Thread Security and Access Rights.
Windows Server 2003 and Windows XP/2000: The handle must have the THREAD_SET_INFORMATION access right.
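Note that the handles CreateProcess() returns in PROCESS_INFORMATION are created with full access, so the access right only comes into play if the handle was obtained some other way, e.g. with OpenThread(). A sketch of doing that explicitly, using the thread ID from PROCESS_INFORMATION::dwThreadId (a hypothetical helper, untested):

    #include <windows.h>
    #include <stdio.h>

    /* Lower the priority of a thread identified by its ID, requesting
       only the access right the documentation asks for. */
    BOOL LowerThreadPriorityById(DWORD threadId)
    {
        BOOL ok = FALSE;
        HANDLE hThread = OpenThread(THREAD_SET_INFORMATION, FALSE, threadId);

        if (hThread == NULL)
        {
            printf("OpenThread failed: %lu\n", GetLastError());
            return FALSE;
        }

        ok = SetThreadPriority(hThread, THREAD_PRIORITY_BELOW_NORMAL);
        if (!ok)
            printf("SetThreadPriority failed: %lu\n", GetLastError());

        CloseHandle(hThread);
        return ok;
    }

If the handle really is the one from PROCESS_INFORMATION, error 6 usually means it was already closed (e.g. by an earlier CloseHandle() call) before SetThreadPriority() ran.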
I have a Node.js and Express.js app running with Nginx in front of it. The app is pretty big and we have around a million users per day. The app's memory usage keeps growing as the load increases, and at some point requests start getting dropped because there is no memory left on the server.
My initial guess was that some module or code snippet was leaking memory, so I explored memory heap snapshots and profiled the app, but I still haven't found the culprit. Any suggestions?
You can spawn a few more machines with higher RAM, then use HAProxy with sticky sessions to balance the load across them.
You can also use Node's cluster mode and tools like pm2.
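If you want to try cluster mode before throwing hardware at it, a minimal sketch with Node's built-in cluster module looks like this (./app is a placeholder for the module that calls app.listen()):

    // app-cluster.js - run the Express app on all cores (sketch)
    const cluster = require('cluster');
    const os = require('os');

    if (cluster.isMaster) {
      // Fork one worker per CPU core.
      os.cpus().forEach(() => cluster.fork());

      // Replace any worker that dies, so a leaking or crashing worker
      // only takes out part of the capacity.
      cluster.on('exit', (worker, code, signal) => {
        console.log(`worker ${worker.process.pid} died, forking a new one`);
        cluster.fork();
      });
    } else {
      require('./app'); // placeholder: your Express app calling app.listen()
    }

pm2 wraps the same idea: something like pm2 start app.js -i max does the forking for you, and its --max-memory-restart option can recycle a worker once it crosses a memory threshold, which at least stops a leak from starving the whole box.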
Good day
In my current situation, when the JVM crashes or needs to be restarted, the Apache Tomcat server has to be started manually. I was wondering if there is a way to force Tomcat to start when the JVM finishes starting up. I'm on an Ubuntu Linux machine.
You are probably a little bit confused - or there is a critical piece of information missing from your question: Tomcat is not something separate from the JVM - it is written in Java and its code is executed within the JVM. On Linux, you will typically have one JVM process for each Java application, such as a Tomcat instance.
Therefore you cannot "start Tomcat once the JVM finishes starting up" - whichever JVM your Tomcat setup is using will just start executing the server bytecode as soon as it is loaded. The Tomcat start-up scripts will launch it with the correct parameters as soon as they are invoked.
I believe that there are four parts to your actual problem:
Determine what the exact behavior of your server is. Is the JVM actually crashing? Or is the Tomcat server encountering a critical exception? Or, perhaps, you just find your server in an unresponsive state? The Linux system logs and the Tomcat log files should contain enough information to tell what is happening.
Or is your Tomcat server just not starting once the OS boots, and you just need to fix your Linux boot configuration?
Determine why that behavior is happening. Is the JVM running out of memory and being terminated by the kernel? Is it crashing due to another issue? Is your web application stuck waiting on e.g. a dead DB server?
Determine how to fix the actual problem. Restarting the application server on a regular basis is a good indication that you need to fix your Tomcat setup - or your application code.
When you have done all you can with the previous steps, only then should you consider an automated solution to help restart your server. There are several service monitoring tools, such as Monit, that you could use, although they usually need someone at least moderately experienced with Linux to set up right.
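For example, a minimal Monit rule for a Tomcat instance could look like the following (the pid file, ports, and service name are assumptions; adjust them to your setup):

    # /etc/monit/conf.d/tomcat  (hypothetical paths and ports)
    check process tomcat with pidfile /var/run/tomcat7.pid
      start program = "/usr/sbin/service tomcat7 start"
      stop program  = "/usr/sbin/service tomcat7 stop"
      if failed host localhost port 8080 protocol http then restart
      if 5 restarts within 5 cycles then timeout

The last line stops Monit from restart-looping forever if the underlying problem (step 2 above) has not actually been fixed.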
I have been attempting to implement a CPU cap for a specific IIS application pool running on a web server (Win2K8 R2). I have tried Windows System Resource Manager (WSRM) with several different process matching criteria, but the process never actually gets capped.
First Attempt
The first process matching criteria I got from here. The actual matching criteria I entered was #w3wp.exe.*MyAppPoolName
Then I created my resource allocation policy, and pointed it at the above process matching criteria. I capped the CPU at 25%, enabled the policy, started my app pool, and kicked off the application running in the pool. The app pool's CPU immediately spiked over the 25% limit and stayed there fairly consistently.
Second Attempt
The next matching criteria I tried came from here. The actual matching criteria I entered was #.*w3wp\.exe.*MyAppPoolName.*
I updated my allocation policy to point to the new matching criteria, and started everything back up. Again, immediately spiked over the limit.
Third Attempt
On my third and final attempt, I used the built-in controls in the Add Rule dialog in WSRM. I selected IIS App-Pool from the drop-down, clicked the Select... button, then chose my app pool. The matching criteria it generated was C:\Windows\system32\inetsrv\w3wp.exe * -ap "MyAppPoolName"
Again, I updated my allocation policy, and started everything up. Again, immediately spiked over the limit.
Has anyone else ever actually been successful at implementing one of these allocation policies? They seem very straightforward to set up, but have been nothing but a pain to actually get to work!
Or upgrade to Windows Server 2012 with IIS 8, which supports CPU throttling out of the box. IIS 7+ does support CPU monitoring, but it only offers to kill the application pool if it goes over the limit.
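On IIS 8 the cap is a per-application-pool setting; if I recall the property names correctly, something like this configures a 25% throttle (cpu.limit is in 1/1000ths of a percent, so 25000 = 25%):

    %windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPoolName" /cpu.limit:25000 /cpu.action:Throttle

Verify the property names with appcmd on your box before relying on this.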
There are advantages to making a process daemonized, as it is detached from the terminal. But the same thing can also be achieved with a cron job. [Kindly correct me if not.]
What criteria can I use to decide when to use a cron job versus a daemon process?
In general, if your task needs to run more than a few times per hour (say, at intervals of less than 10 minutes), you probably want to run a daemon.
A daemon, which is always running, has the following benefits:
It can run more often than once per minute (cron's granularity bottoms out at one minute)
It can remember state from its previous run more easily, which makes programming simpler (if you need to remember state) and can improve efficiency in some cases
On an infrastructure with many hosts, it does not cause a "stampeding herd" effect
Multiple invocations can be avoided more easily (perhaps?)
BUT
If it quits (e.g. following an error), it won't automatically be restarted unless you implemented that feature
It uses memory even when not doing anything useful
Memory leaks are more of a problem.
In general, robustness favours "cron", and performance favours a daemon. But there is a lot of overlap (where either would be ok) and counter-examples. It depends on your exact scenario.
The difference between a cronjob and a daemon is the execution time frame.
A cron job is a process that is executed once in a while. An example could be a script that removes the contents of a temporary folder every so often, or a program that sends push notifications every day at 9:00 am to a bunch of devices (a one-line crontab entry, as sketched below).
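For instance, assuming a made-up script path:

    # m h dom mon dow  command
    0 9 * * * /usr/local/bin/send_push_notifications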
A daemon, on the other hand, is a process that runs detached from any user, but it won't be relaunched if it comes to an end.
If you need a service that is permanently available to others, then you need to run a daemon. This is a fairly complicated programming task, since the daemon needs to be able to communicate with the world on a permanent basis (e.g. by listening on a socket or TCP port), and it needs to be written to handle each job cleanly without leaking or even locking up resources for a long time.
By contrast, if you have a specific job whose description can be determined well enough in advance, and which can act automatically without further information, and is self-contained, then it may be entirely sufficient to have a cron job that runs the task periodically. This is much simpler to design for, since you only need a program that runs once for a limited time and then quits.
In a nutshell: A daemon is a single process that runs forever. A cron job is a mechanism to start a new, short-lived process periodically.
A daemon can take advantage of its longevity by caching state, deferring disk writes, or engaging in prolonged sessions with a client.
A daemon must also be free of memory leaks, as they are likely to accumulate over time and cause a problem.
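To make the contrast concrete, here is a bare-bones sketch of the classic Unix daemonization steps in C; a real daemon would add logging, signal handling, and a pid file:

    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/stat.h>

    int main(void)
    {
        /* Detach from the terminal: fork, let the parent exit,
           then start a new session in the child. */
        pid_t pid = fork();
        if (pid < 0)
            exit(1);            /* fork failed */
        if (pid > 0)
            exit(0);            /* parent exits; child lives on */
        setsid();

        umask(0);
        chdir("/");

        /* Long-lived work loop; state kept here survives between
           jobs, which is exactly what a cron job cannot do. */
        for (;;) {
            /* do_periodic_work();   hypothetical */
            sleep(60);
        }
    }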
If you update, what kinds of problems can happen before you reboot? Updating without an immediate reboot happens especially frequently if you use unattended-upgrade to apply security patches.
Shared objects get replaced and so it is possible for programs to get out of sync with each other.
How long can you go safely before rebooting?
Clarification:
What I meant by "can programs get out of sync with one another" is that one running binary has the earlier version of the shared object loaded, while a newly launched instance picks up the newer version. It seems to me that if those versions are incompatible, the two binaries may not interoperate properly.
And does this happen in practice very often?
More clarification:
What I'm getting at is more along these lines: installers typically stop and restart services that depend on a shared library so that they pick up the new version of an API. If they catch all the dependencies, then you are probably OK. But do people often see installers missing dependencies?
If a service is written to support all previous API versions compatibly, then this will not be an issue. But I suspect that this is often not done.
If there are kernel updates, especially if there are incompatible ABI changes, I don't see how you can get all the dependencies. I was looking for experience with whether and how things "tip over" and whether people have observed this in practice, either for kernel updates or for library/package updates.
Yes, this probably should have been put into ServerFault...
There are two versions of an executable file at any moment in time: the one in memory and the one on disk.
When you update, the one on disk gets replaced; the one in memory is the old one. If it's a shared object, it stays there until every application that uses it quits; if it's the kernel, it stays there until you reboot.
Bluntly put, if it's a security vulnerability you're updating for, the vulnerability stays until you load the (hopefully) patched version. So if it's the kernel, you aren't safe until you reboot. If it's a shared object, restarting every application that uses it is enough, and a reboot guarantees that.
Basically, I'd say it depends on the category of the vulnerability. If it's security, restart whatever is affected. Otherwise, well, unless the bug is adversely affecting you, I wouldn't worry. If it's the kernel, I always reboot.
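One way to see whether this has happened in practice: after an upgrade, list the processes that still have the old (now deleted) shared objects mapped; anything listed needs a restart to pick up the patched code:

    # PIDs still mapping shared objects that were replaced on disk
    sudo grep -l '(deleted)' /proc/[0-9]*/maps 2>/dev/null
    # Debian/Ubuntu: the debian-goodies package ships a helper for this
    sudo checkrestart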