Two xwin-xdg-menu processes with high CPU consumption - cygwin

I have a Windows 7 computer with an Intel i7 with 2 cores and hyperthreading, and a Linux virtual machine in a cloud. I don't like VNC (it's laggy), so I use X windowing.
I start my Cygwin XWin with the following command:
C:\cygwin64\bin\run.exe --quote /usr/bin/bash.exe -l -c "cd; /usr/bin/xinit /etc/X11/xinit/startxwinrc -- /usr/bin/XWin :0 -multiwindow -listen tcp"
It's otherwise working just as intended, but for some reason it spawns two xwin-xdg-menu processes, one of which consumes 25% of my CPU. When I kill that one, CPU usage returns to normal and everything works fine, including the other xwin-xdg-menu process.
I tried also this:
C:\cygwin64\bin\XWin.exe :0 -multiwindow -listen tcp
but it makes the application run slowly and with a bad resolution.
Is there a way to start X with -listen tcp, with a resolution adapted to my multiple screens, and without having to kill the extra process manually every time?
It seems I'm not the only one with this problem, but so far I haven't found any solution to it.
https://cygwin.com/ml/cygwin/2017-05/msg00345.html
https://superuser.com/questions/1210325/cygwin-at-spi-bus-launcher-and-xwin-xdg-menu-high-cpu (I don't have problems with at-spi-bus-launcher though)

Solution:
Create a ~/.startxwinrc file, and add one line:
exec sleep infinity
Make ~/.startxwinrc executable by running chmod +x ~/.startxwinrc.
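In other words, the whole fix can be applied from a Cygwin shell in two commands (a sketch of the steps above):
echo 'exec sleep infinity' > ~/.startxwinrc
chmod +x ~/.startxwinrc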
Reason I suspect this worked:
startxwin searches for a ~/.startxwinrc file to execute when launching. If it does not find one, it follows the default routine outlined in /etc/X11/xinit/startxwinrc.
The default routine launches /usr/bin/xwin-xdg-menu, somehow causing me to have two xwin-xdg-menu processes, one of them with very high CPU. Creating ~/.startxwinrc bypasses the default routine, preventing /usr/bin/xwin-xdg-menu from launching altogether.
exec sleep infinity keeps the X server alive after launching.

Related

Running perf-top in a script

I have some intermittent performance issues that I want to capture via perf top. The issue is intermittent, so I want to write a script that runs perf top when the issue is occurring so that I can save the data and view it at a later time.
I can't seem to figure out how to get perf top to put its output in a file; it seems to demand to be run interactively. Here is what I've tried so far:
# timeout 10 perf top --stdio -E 20 > 'perf-top'
This does not kill perf, just leaves it running in the background forever until I create another console session, find the PID, and kill it.
# timeout --signal=9 10 perf top --stdio -E 20 > 'perf-top'
This kills perf in the expected 10 seconds, but the output is not written to the file that I specified.
Is there some special way that this command needs to be run? It works if I run it from an interactive ssh session, but I'd really like to be able to run it from a script. I'm trying to put it into an ansible task with a few other metrics-gathering programs.
SIGKILL (9) isn't catchable, so it's impossible for perf to flush any buffered output.
Use another signal so perf can clean up after receiving it and write its output.
If the default SIGTERM doesn't work, try SIGHUP, SIGINT, or others:
timeout --signal=INT
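Applied to the command above, that becomes:
timeout --signal=INT 10 perf top --stdio -E 20 > perf-top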
timeout 10 perf top --stdio -E 20 > perf-top (with the default SIGTERM) works for me (as non-root) on my Arch Linux desktop. perf version 5.0.g1c163f4

Simulating a process stuck in a blocking system call

I'm trying to test a behaviour which is hard to reproduce in a controlled environment.
Use case:
Linux system; usually Red Hat EL 5 or 6 (we're just starting with RHEL 7 and systemd, so it's currently out of scope).
There are situations where I need to restart a service. The script we use for stopping the service usually works quite well: it sends a SIGTERM to the process, which is designed to handle it; if the process doesn't handle the SIGTERM within a timeout (usually a couple of minutes), the script sends a SIGKILL, then waits a couple of minutes more.
The problem is: in some (rare) situations, the process doesn't exit after a SIGKILL; this usually happens when it's badly stuck in a system call, possibly because of a kernel-level issue (a corrupt filesystem, a non-working NFS filesystem, or something equally bad requiring manual intervention).
A bug arose when the script didn't realize that the "old" process hadn't actually exited and started a new process while the old one was still running; we're fixing this with a stronger locking system (so that at least the new process doesn't start if the old one is running), but I find it difficult to test the whole thing because I haven't found a way to simulate a hard-stuck process.
So, the question is:
How can I manually simulate a process that doesn't exit when sending a SIGKILL to it, even as a privileged user?
If your process is stuck doing I/O, you can simulate the situation this way:
lvcreate -n lvtest -L 2G vgtest
mkfs.ext3 -m0 /dev/vgtest/lvtest
mount /dev/vgtest/lvtest /mnt
dmsetup suspend /dev/vgtest/lvtest && dd if=/dev/zero of=/mnt/file.img bs=1M count=2048 &
This way the dd process will get stuck waiting for I/O and will ignore every signal; note that signals are no longer ignored in the latest kernels when processes are waiting for I/O on an NFS filesystem.
Well... how about just not sending SIGKILL? Your environment will then behave as if it had been sent, but the process didn't quit.
Once a process is in the "D" state (TASK_UNINTERRUPTIBLE), it is executing a kernel code path that cannot be interrupted while the task is processed, which means that sending any signal to the process is useless and will be ignored.
This can be caused by a device driver getting too many interrupts from the hardware, too many incoming network packets, data from NIC firmware, or being blocked on a HDD performing I/O. Normally this happens very quickly, and threads remain in this state for only a very short span of time.
Therefore what you need to do is look at the syslog and sar reports from the time when the process was stuck in the D state. If you find stack traces in the log, try searching bugzilla.kernel.org for similar issues or seek support from your Linux vendor.
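For example, a quick way to list tasks currently stuck in the D state (standard procps ps options):
ps -eo pid,stat,wchan:32,cmd | awk '$2 ~ /^D/'
The wchan column shows the kernel function the task is sleeping in, which is a useful hint for the bug search.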
I would code it the opposite way. Have your server process write its PID to e.g. /var/run/yourserver.pid (this is common practice). Have the starting script read that file and test that the process does not exist, e.g. with kill with signal 0, or with:
yourserver_pid=$(cat /var/run/yourserver.pid)
if [ -f /proc/$yourserver_pid/exe ]; then
    exit 1   # the old process is still alive; refuse to start a new one
fi
You could improve on that by running readlink /proc/$yourserver_pid/exe and comparing the result to /usr/bin/yourserver.
BTW, having a process still alive a few seconds after a SIGKILL is a serious situation (the common case where it can happen is when the process is stuck in a D state, waiting for some NFS server), and you probably should detect and syslog it (e.g. with logger in your script).
I would also try to first send SIGTERM, wait a few seconds, send SIGQUIT, wait a few seconds, and at last send SIGKILL, and only a few seconds later test that the server process has gone.
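A minimal sketch of that escalation, reusing the /var/run/yourserver.pid convention from above (delays and messages are illustrative):
pid=$(cat /var/run/yourserver.pid)
for sig in TERM QUIT KILL; do
    kill -$sig "$pid" 2>/dev/null || break   # stop escalating once the process is gone
    sleep 5
done
if kill -0 "$pid" 2>/dev/null; then
    logger "yourserver (pid $pid) survived SIGKILL; probably stuck in D state"
fi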
A bug arose when the script didn't realize that the "old" process hadn't actually exited and started a new process while the old was still running;
This is a bug at the OS/kernel level, not in your service script. The situation is rare and hard to simulate because the OS is supposed to kill the process when a SIGKILL arrives. So I guess your goal is to make your script work well under a buggy kernel. Is that correct?
You can attach gdb to the process; SIGKILL won't remove such a process from the process list, but it will flag it as a zombie, which might still be acceptable for your purpose.
void@tahr:~$ ping 8.8.8.8 > /tmp/ping.log &
[1] 3770
void@tahr:~$ ps 3770
PID TTY STAT TIME COMMAND
3770 pts/13 S 0:00 ping 8.8.8.8
void@tahr:~$ sudo gdb -p 3770
...
(gdb)
In the other terminal:
void@tahr:~$ ps 3770
PID TTY STAT TIME COMMAND
3770 pts/13 t 0:00 ping 8.8.8.8
void@tahr:~$ sudo kill -9 3770
...
void@tahr:~$ ps 3770
PID TTY STAT TIME COMMAND
3770 pts/13 Z 0:00 [ping] <defunct>
Back in the first terminal:
(gdb) quit

Difference between nohup vs at now

It seems that there is no difference between nohup and at now, but maybe there are subtleties?
The difference is that at now runs a command that can respond to the HUP signal, whereas nohup runs a command that is immune to the HUP signal.
Ed Heal is right. But another difference is that something run by nohup still has a controlling terminal, whereas something run by at now does not.
In addition to that, backgrounding something with nohup causes it to run immediately, whereas at now simply queues it to be run the next time atrun(8) runs. In BSD Unix (FreeBSD/OpenBSD), at jobs are launched by atrun, which is run periodically by cron (or launchd on OS X). In Linux, at jobs are run by at's own daemon, atd, which by default checks for jobs every 60 seconds.
Other flavours of unix may have different strategies, but in most cases you'll probably find that jobs launched by at now are less immediate than jobs launched using nohup.
nohup tells the system to continue running even after you log out.
at is used to execute a command or multiple commands once at some future time.
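As a quick illustration of both (long_task stands in for your own command):
nohup long_task &             # starts immediately, still has a controlling terminal
echo "long_task" | at now     # queued for atd/atrun, runs with no controlling terminal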

linux - running process background

I want to run a process on a remote Linux server and keep that process alive after closing the PuTTY terminal. What is the correct command?
You have two options:
Use GNU screen, which will allow you to run the command and detach it from your terminal, and later re-attach it to a different session. I use it for long-running processes whose output I want to be able to monitor at any time. Screen is a truly powerful tool and I would highly recommend spending some time to learn it.
Run the command as nohup some-command &, which will run the command in the background, detach it from the console, and redirect its output into nohup.out. It will swallow SIGHUPs that are sent to the process. (When you close the terminal or log out, SIGHUP is sent to all processes that were started by the login shell, and the default action the kernel will take is to kill the process off. This is why appending & to put the process in the background is not enough for it to survive a logout.)
Don't use that nohup junk, I hate seeing that on servers; screen is a wasting pile of bits and rot -- use tmux.
If you want to background a process, double-fork like every other daemon since the beginning of time:
# ((exec sleep 30)&)
# grep PPid /proc/`pgrep sleep`/status
PPid: 1
# jobs
# disown
bash: disown: current: no such job
enjoy.
A modern and easy-to-use approach that allows managing multiple processes and has a nice terminal UI is the hapless utility.
Install it with pip install hapless (or python3 -m pip install hapless) and just run:
$ hap run my-command # e.g. hap run python my_long_running_script.py
$ hap status # check all the launched processes
See docs for more info.
A command launched enclosed in parentheses, i.e.
(command &)
will survive the death of the originating shell.

How to make sure an application keeps running on Linux

I'm trying to ensure a script remains running on a development server. It collates stats and provides a web service, so it's supposed to persist, yet a few times a day it dies off for unknown reasons. When we notice, we just launch it again, but it's a pain in the rear, and some users don't have permission (or the know-how) to launch it.
The programmer in me wants to spend a few hours getting to the bottom of the problem but the busy person in me thinks there must be an easy way to detect if an app is not running, and launch it again.
I know I could cron-script ps through grep:
ps -A | grep appname
But again, that's another hour of my life wasted on doing something that must already exist... Is there not a pre-made app that I can pass an executable (optionally with arguments) and that will keep a process running indefinitely?
In case it makes any difference, it's Ubuntu.
I have used a simple script with cron to make sure that the program is running. If it is not, then it will start it up. This may not be the perfect solution you are looking for, but it is simple and works rather well.
#!/bin/bash
# make-run.sh
# Make sure a process is always running.
export DISPLAY=:0   # needed if you are running a simple GUI app
process=YourProcessName
makerun="/usr/bin/program"
if ps ax | grep -v grep | grep $process > /dev/null
then
    exit
else
    $makerun &
fi
exit
Then add a cron job to run it every minute, or every 5 minutes.
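For example, assuming the script was saved as /usr/local/bin/make-run.sh, the crontab entry (added with crontab -e) could look like:
*/5 * * * * /usr/local/bin/make-run.sh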
Monit is perfect for this :)
You can write simple config files which tell monit to watch e.g. a TCP port, a PID file, etc.
monit will run a command you specify when the process it is monitoring is unavailable, is using too much memory, is pegging the CPU for too long, etc. It will also send an email alert telling you what happened and whether it could do anything about it.
We use it to keep a load of our websites running while giving us early warning when something's going wrong.
-- Your faithful employee, Monit
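A minimal sketch of such a config (the process name, pid file path, init script, and port are assumptions for illustration; check the monit documentation for your setup):
check process appname with pidfile /var/run/appname.pid
    start program = "/etc/init.d/appname start"
    stop program = "/etc/init.d/appname stop"
    if failed port 80 protocol http then restart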
Notice: Upstart is in maintenance mode and was abandoned by Ubuntu, which now uses systemd. Check the systemd manual for details on how to write a service definition.
Since you're using Ubuntu, you may be interested in Upstart, which has replaced the traditional sysV init. One key feature is that it can restart a service if it dies unexpectedly. Fedora has moved to Upstart, and Debian has it in experimental, so it may be worth looking into.
This may be overkill for this situation though, as a cron script will take 2 minutes to implement.
#!/bin/bash
if [[ ! $(pidof -s yourapp) ]]; then
    invoke-rc.d yourapp start
fi
If you are using a systemd-based distro such as Fedora or a recent Ubuntu release, you can use systemd's "Restart" capability for services. It can be set up as a system service or as a user service if it needs to be managed by, and run as, a particular user, which is more likely the case in the OP's particular situation.
The Restart option takes one of no, on-success, on-failure, on-abnormal, on-watchdog, on-abort, or always.
To run it as a user, simply place a file like the following into ~/.config/systemd/user/something.service:
[Unit]
Description=Something
[Service]
ExecStart=/path/to/something
Restart=on-failure
[Install]
WantedBy=graphical.target
then:
systemctl --user daemon-reload
systemctl --user [status|start|stop|restart] something
No root privilege / modification of system files needed, no cron jobs needed, nothing to install, flexible as hell (see all the related service options in the documentation).
See also https://wiki.archlinux.org/index.php/Systemd/User for more information about using the per-user systemd instance.
I have used this from cron: killall -0 programname || /etc/init.d/programname start. killall -0 exits with an error if no matching process exists; if one does exist, it delivers the null signal to the process (which the kernel will ignore and not bother passing on).
This idiom is simple to remember (IMHO). Generally I use this while I'm still trying to discover why the service itself is failing. IMHO a program shouldn't just disappear unexpectedly :)
Put your run in a loop, so that when it exits, it runs again... while(true){ run my app.. }
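In bash, that amounts to something like this (the app path is a placeholder):
while true; do
    /path/to/app
    sleep 1   # avoid a tight respawn loop if the app dies instantly
done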
I couldn't get Chris Wendt's solution to work for some reason, and it was hard to debug. This one is pretty much the same but easier to debug, and it excludes bash from the pattern matching. To debug, just run: bash /root/makerun-mysql.sh. In the following example with mysql-server, just replace the values of the process and makerun variables for your process.
Create a bash script like this (nano /root/makerun-mysql.sh):
#!/bin/bash
process="mysql"
makerun="/etc/init.d/mysql restart"
if ps ax | grep -v grep | grep -v bash | grep --quiet $process
then
    printf "Process '%s' is running.\n" "$process"
    exit
else
    printf "Starting process '%s' with command '%s'.\n" "$process" "$makerun"
    $makerun
fi
exit
Make sure it's executable by setting proper file permissions (e.g. chmod 700 /root/makerun-mysql.sh).
Then add this to your crontab (crontab -e):
# Keep processes running every 5 minutes
*/5 * * * * bash /root/makerun-mysql.sh
The supervise tool from daemontools would be my preference - but then everything Dan J Bernstein writes is my preference :)
http://cr.yp.to/daemontools/supervise.html
You have to create a particular directory structure for your application startup script, but it's very simple to use.
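Roughly, each service gets its own directory containing an executable run script (the paths and program name here are illustrative), e.g. /service/yourapp/run:
#!/bin/sh
exec /usr/bin/yourapp 2>&1
Running supervise /service/yourapp then starts ./run and restarts it whenever it exits.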
First of all, how do you start this app? Does it fork itself to the background? Is it started with nohup ... & etc.? If it's the latter, check nohup.out to see why it died; if it's the former, build in logging.
As for your main question: you could cron it, or run another process in the background (not the best choice) and use pidof in a bash script, easy enough:
if ! pidof -s app > /dev/null; then
    nohup app &
fi
You could make it a service launched from inittab (although some Linuxes have moved on to something newer in /etc/event.d). These built-in systems make sure your service keeps running without you writing your own scripts or installing something new.
It's a job for a DMD (daemon monitoring daemon). There are a few around, but I usually just write a script that checks whether the daemon is running and starts it if not, and put it in cron to run every minute.
Check out 'nanny' referenced in Chapter 9 (p197 or thereabouts) of "Unix Hater's Handbook" (one of several sources for the book in PDF).
A nice, simple way to do this is as follows:
Write your server to die if it can't listen on the port it expects
Set a cronjob to try to launch your server every minute
If it isn't running it'll start, and if it is running it won't. In any case, your server will always be up.
I think a better solution is to test the functionality too. For example, if you have to monitor Apache, it is not enough to test whether "apache" processes exist on the system.
If you want to test whether Apache is OK, try to download a simple web page and check that your unique string is in the output.
If not, kill Apache with -9 and then do a restart. Also send a mail to root (which should be a forwarded mail address reaching the admins of the company/server/project).
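A sketch of such a functional check (the URL, marker string, process name, and init script are assumptions):
#!/bin/bash
# Fetch the page and look for the marker; restart and notify root on failure.
if ! curl -fs http://localhost/ | grep -q 'my-unique-marker'; then
    killall -9 apache2
    /etc/init.d/apache2 start
    echo "apache was restarted" | mail -s "apache restart on $(hostname)" root
fi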
It's even simpler:
#!/bin/bash
export DISPLAY=:0
process=processname
makerun="/usr/bin/processname"
if ! pgrep $process > /dev/null
then
    $makerun &
fi
You have to remember, though, to make sure that processname is unique.
One can install a minutely monitoring cronjob like this:
crontab -l > crontab; echo -e '* * * * * export DISPLAY=":0.0" && for app in "eiskaltdcpp-qt" "transmission-gtk" "nicotine"; do ps aux | grep -v grep | grep "$app"; done || "$app" &' >> crontab; crontab crontab
The disadvantage is that the app names you enter have to appear in the ps aux | grep "appname" output and at the same time be launchable using that very name: "appname" &.
You can also use the pm2 library. pm2 is distributed via npm, so install it globally:
sudo npm install pm2 -g
Then you can run the service. For a Linux service or binary:
sudo pm2 start [service_name]
For a Node app:
pm2 start index.js
