The terminal keeps displaying 'number of files 1', roughly once every few minutes. Restarting the OS (CentOS, in my case) doesn't help; I've been seeing this for months. Though it doesn't affect other processes, it clutters the terminal and I have to press CTRL+C to silence it temporarily, and I'm worried some background process is permanently in a wrong state. Does it have anything to do with the command I use to display the GUI folders I need at work?
nautilus -q &> /dev/null
nautilus dir1 dir2 .. dirn &> /dev/null &
# These two lines can prevent the 'number of files 1' messages.
I've googled the phrase 'number of files 1', but none of the results seem related to this question, so I'm wondering if others have run into the same issue before.
Could you give some suggestions on how to debug and resolve this issue?
[root@localhost cp2vm]# whoami
root
[root@localhost cp2vm]# uname -a
Linux localhost.localdomain 3.10.0-693.el7.x86_64 #1 SMP Tue Aug 22 21:09:27 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
output of strings /usr/bin/nautilus:
http://www.filedropper.com/stringsnautilus
nautilus --version output: GNOME nautilus 3.22.3
Just run nautilus &> /dev/null to avoid nautilus polluting stdout and stderr. This way your terminal won't show those messages.
Edit:
To make it explicit, this should work in your script.
nautilus -q &> /dev/null # Exits all nautilus instances, ignore output
nautilus dir1 dir2 dir3 &> /dev/null # Runs nautilus, ignore output
I doubt the output redirection does much for nautilus -q, but from your information it's hard to tell when and how often you call that script. So it might be superfluous, but it won't do any harm.
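For what it's worth, a minimal sketch of such a launcher script (directory names are placeholders from the question; the disown at the end is my addition, not part of the answer):
#!/bin/bash
# Ask any running nautilus instances to quit; discard whatever they print.
nautilus -q &> /dev/null
# Reopen the work folders in the background, again discarding output.
nautilus dir1 dir2 dirn &> /dev/null &
# Detach the job from this shell so job-control notices don't reach the terminal.
disown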
I was looking for a way to route the output of my shell scripts to syslog, and found this article, which suggests putting the following line at the top of the script:
exec 1> >(logger -s -t $(basename $0)) 2>&1
I've tried this with the following simple script:
#!/bin/bash
exec 1> >(logger -s -t $(basename $0)) 2>&1
echo "testing"
exit 0
When I run this script from the shell, I do indeed get the message in the syslog, but the script doesn't seem to return; in order to continue interacting with the shell, I need to hit Enter or send a SIGINT signal. What's going on here? FWIW, I'm mostly using this to log the results of cron jobs, so in the wild I probably don't need it to work properly in an interactive shell session, but I'm nervous about using something I don't really understand in production. I am mostly worried about spawning a bunch of processes that don't terminate cleanly.
I've tested this on Ubuntu 15.10, Ubuntu 16.04, and OSX, all with the same result.
Cutting a long story short: the shell script does exit and so does the logger (there isn't actually a problem), but the output from the logger led to confusion.
Converting comments into an answer.
Superficially, given the symptoms you describe, what's going on is that Bash isn't exiting until all its child processes exit. You could try exec >/dev/null 2>&1 before exit 0 to see if that stops the logger — basically, the redirection closes its inputs, so it should terminate, allowing the script to exit.
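As a sketch, that suggestion applied to the script above would look like this (assuming nothing else holds the write end of the pipe):
#!/bin/bash
exec 1> >(logger -s -t $(basename $0)) 2>&1
echo "testing"
# Re-point stdout/stderr away from the process substitution; its pipe then
# sees EOF, so the logger can exit instead of lingering after the script.
exec >/dev/null 2>&1
exit 0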
However, when I try your script (bash logtest.sh) on macOS Sierra 10.12.2 (though I'd not expect it to change in earlier versions), the command exits promptly and produces a log message on the terminal like this (I use Osiris JL: as my prompt):
Osiris JL: bash logtest.sh
Osiris JL: Dec 26 12:23:50 logtest.sh[6623] <Notice>: testing
Osiris JL: ps
PID TTY TIME CMD
71792 ttys000 0:00.25 -bash
534 ttys002 0:00.57 -bash
543 ttys003 0:01.71 -bash
558 ttys004 0:00.44 -bash
Osiris JL:
I hit return on the blank line and got the prompt before the ps command.
Note that the message from logger arrived after the prompt.
When I ran bash logtest.sh (where logtest.sh contained your script), the only key I hit was the return to enter the command (which the shell read before running the command). I then got a prompt, the output from logger, and a blank line with the terminal waiting for input. That's normal. The logger was not still running — I could check that in other windows.
Try typing ls instead of just hitting return. The shell is waiting for input. It wrote its prompt, but the logger output confused the on-screen layout. For me, I got:
Osiris JL: bash logtest.sh
Osiris JL: Dec 26 13:28:28 logtest.sh[7133] <Notice>: testing
ls
README.md ix37.sql mq13.c sh11.o
Safe lib mq13.dSYM so-4018-8770
Untracked ll89 oddascevendesc so-4018-8770.c
ci11 ll89.cpp oddascevendesc.c so-4018-8770.dSYM
ci11.c ll89.dSYM oddascevendesc.dSYM sops
ci11.dSYM ll97 rav73 src
data ll97.c rav73.c tf17
doc ll97.dSYM rav73.dSYM tf17.cpp
es.se-36764 logtest.sh rd11 tf17.dSYM
etc mac-clock-get-time rd11.c tf19
fa37.sh mac-clock-get-time.c rd11.dSYM tf19.c
fileswap.sh mac-clock-get-time.dSYM rn53 tf19.dSYM
gm11 makefile rn53.c x-paste.c
gm11.c matrot13 rn53.dSYM xc19
gm11.dSYM matrot13.c sh11 xc19.c
inc matrot13.dSYM sh11.c xc19.dSYM
infile mq13 sh11.dSYM
Osiris JL:
I am having trouble saving the output of mtr --report-wide to a text file, probably due to the different ways the two options output their information. I know I could use the --raw argument, but I would like to avoid that.
Does anybody have a solution?
Linux version:
Linux debian 3.2.0-4-amd64 #1 SMP Debian 3.2.63-2+deb7u1 x86_64 GNU/Linux
Works:
nohup mtr --report destination --report-cycles=10 > output &
Does not work (process never stops):
nohup mtr --report-wide destination --report-cycles=10 > output &
"process never stops"
Quite the contrary - the process is stopped immediately due to a SIGTTOU signal, and thus never terminates.
solution?
Just redirect STDERR as well, by using … >&output & instead of … >output &.
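Applied to the failing command from the question, that gives (destination is the placeholder host from the question):
nohup mtr --report-wide destination --report-cycles=10 >& output &   # >& sends both stdout and stderr to the file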
I recently upgraded from CentOS 5.8 (with GNU bash 3.2.25) to CentOS 6.5 (with GNU bash 4.1.2). A command that used to work with CentOS 5.8 no longer works with CentOS 6.5. It is a silly example with an easy workaround, but I am trying to understand what is going on underneath the bash hood that is causing the different behavior. Maybe it is a new bug in bash 4.1.2 or an old bug that was fixed and the new behavior is expected?
CentOS 5.8:
(echo "hi" > /dev/stdout) > test.txt
echo $?
0
cat test.txt
hi
CentOS 6.5:
(echo "hi" > /dev/stdout) > test.txt
-bash: /dev/stdout: Not a directory
echo $?
1
Update: It doesn't look like this problem is related to the CentOS version. I have another CentOS 6.5 machine where the command works. I have eliminated environment variables as the culprit. Any ideas?
On all the machines these commands gives the same output:
ls -ld /dev/stdout
lrwxrwxrwx 1 root root 15 Apr 30 13:30 /dev/stdout -> /proc/self/fd/1
ls -lL /dev/stdout
crw--w---- 1 user1 tty 136, 0 Oct 28 23:21 /dev/stdout
Another update: It seems the sub-shell is inheriting the redirected stdout of the parent shell. That is not too surprising, I guess, but then why does it work on one machine and fail on the other when both are running the same bash version?
On the working machine:
((ls -la /dev/stdout; ls -la /proc/self/fd/1) >/dev/stdout) > test.txt
cat test.txt
lrwxrwxrwx 1 root root 15 Aug 13 08:14 /dev/stdout -> /proc/self/fd/1
l-wx------ 1 user1 aladdin 64 Oct 29 06:54 /proc/self/fd/1 -> /home/user1/test.txt
I think Yu Huang is right: redirecting to /tmp works on both machines. Both machines use an Isilon NAS for the /home mount, but one probably has a slightly different filesystem version or configuration that caused the error. In conclusion, redirecting to /dev/stdout should be avoided unless you know the parent process will not redirect it.
UPDATE: This problem arose after upgrade to NFS v4 from v3. After downgrading back to v3 this behavior went away.
Good morning, user1999165 :)
I suspect it's related to the underlying filesystem. On the same machine, try:
(echo "hi" > /dev/stdout) > /tmp/test.txt
/tmp/ should be a Linux-native (ext3 or similar) filesystem
On many Linux systems, /dev/stdout is an alias (a symlink or similar) for file descriptor 1 of the current process. Seen from C, the global stdout is connected to file descriptor 1.
That means echo foo > /dev/stdout is the same as echo foo 1>&1, i.e. a redirect of a file descriptor to itself. I wouldn't expect this to work, since the semantics are "close the descriptor being redirected and then clone the new target onto it". So for it to work, there must be special code which notices that the two file descriptors are actually the same and skips the "close" step.
My guess is that on the system where it fails, BASH isn't able to figure out /dev/stdout == fd1 and actually closes it. The error message is weird, though. OTOH, I don't know any other common error which would fit better.
Note: I tried to replicate your problem on Kubuntu 14.04 with BASH 4.3.11, and here the redirect works (i.e. I don't get an error). Maybe it's a bug in BASH 4.1 which has been fixed since.
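For reference, a quick way to compare the two forms on a given machine (file names are arbitrary; on an unaffected system each file ends up with its line):
(echo via-devstdout > /dev/stdout) > out1.txt   # reopens whatever fd 1 currently points at
(echo via-dup 1>&1) > out2.txt                  # duplicates fd 1 onto itself
cat out1.txt out2.txt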
I was seeing issues writing piped stdin input to AWS EFS (NFSv4) that paralleled this issue. (I'm on CentOS 6.8, so unfortunately I cannot upgrade bash to 4.2.)
I asked AWS support about this; here's their response:
This problem is not related to EFS itself, the problem here is with bash. This issue was fixed in bash 4.2 or later in RHEL.
To avoid this problem, please try creating a file descriptor before running the echo command within a subshell; after that, the same descriptor can be used as a redirect, as in the example below:
exec 5> test.txt; (echo "hi" >&5); cat test.txt
hi
Apparently I've done something strange/wrong in a tcsh shell, and now whenever I start an application in the background that prints to stdout, the application is suspended (stopped). The weird thing is, this behavior only happens in this terminal; if I do the same in another terminal, the application just keeps running in the background and prints its output to the terminal.
In the "broken" terminal I have to put the suspended application back into foreground (with fg) to have it continue.
Example:
thehost:/tmp/test1(277)> ls -l &
[3] 1454
thehost:/tmp/test1(278)>
[3] + Suspended (tty output) ls --color=auto -l
thehost:/tmp/test1(278)> fg
ls --color=auto -l
total 0
thehost:/tmp/test1(279)>
Same command executed in another terminal works fine:
thehost:/tmp/test1(8)> ls -l &
[1] 2280
thehost:/tmp/test1(9)> total 0
[1] Done ls --color=auto -l
thehost:/tmp/test1(9)>
Starting a bash in the affected terminal doesn't solve this either:
thehost:/tmp/test1(280)> bash
oliver@thehost:/tmp/test1$ ls -l &
[1] 2263
oliver@thehost:/tmp/test1$
[1]+ Stopped ls --color=auto -l
oliver@thehost:/tmp/test1$ fg
ls --color=auto -l
total 0
oliver@thehost:/tmp/test1$
Getting a new login shell (with su - oliver) doesn't solve this either.
So: what did I do in this terminal to get this behavior, and what can I do to get back the normal behavior? It's not really an important problem (I could close the terminal and open a new one), but I'm curious :-)
Happens on Linux RHEL 6.4 64bit, with KDE 4.11.5 and Konsole 2.11.3, and tcsh 6.17.00.
This will fix it:
stty -tostop
From the man page:
tostop (-tostop)
Send (do not send) SIGTTOU for background output. This causes background jobs to stop if they attempt terminal output.
Normally -tostop (the flag turned off) is the default, which is why your other terminals behave as expected; something in this session must have turned tostop on. Some people enable tostop deliberately, since it's usually undesirable to mix the output of multiple jobs, and they want only the foreground job to print to the terminal.
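A quick check (my addition, not from the answer) to see where the flag stands in the affected terminal before changing it:
stty -a | tr ' ' '\n' | grep tostop   # prints "tostop" if set, "-tostop" if clear
stty -tostop                          # clear it so background jobs may write again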
For zsh you can use:
nohup ls -l 2>/dev/null &
that is, nohup [command] 2>/dev/null &
Hope that helps
Hi everyone,
I need to use ssh to connect to a remote Linux machine. I already know how to run and display a GUI program on the remote machine. It can be done by:
ssh username@ip
export DISPLAY=:0.0
firefox &
However, my target Linux machine has no X Window System, and I need to display the execution result on the remote machine's screen. For example:
my PC is A, the remote PC is B
A uses ssh to access B; after connecting to B, I type ls on A and press Enter
the execution result should display on B's screen (tty, or whatever it should be called)
Any ideas? Thanks for your help.
Basic idea:
a$ ssh user@b
b$ run-program >/dev/console
(I use a$ and b$ to indicate the shell prompts on A and B respectively.)
Problem with this:
b$ ls -l /dev/console
crw------- 1 root root 5, 1 Mar 19 09:10 /dev/console
Only root can write to /dev/console.
Possible workaround:
b$ run-program | sudo tee /dev/console >/dev/null
(Redirecting to /dev/null here prevents the output from showing up on your screen as well.)
This does depend on user@b being allowed to run sudo tee /dev/console.
If you are sysadmin for B and user@b is not allowed to run sudo tee /dev/console, read man 5 sudoers and man 8 visudo to find out how to give user@b this permission.
If you are not sysadmin for B and user@b is not allowed to run sudo tee /dev/console, you will have to ask B's sysadmin to set this up for you.
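For reference, a minimal sudoers entry for this (added via visudo; the username and the tee path are placeholders you would adapt to B):
# Allow exactly this command to run as root without a password.
user ALL=(root) NOPASSWD: /usr/bin/tee /dev/console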