Exclude all permission denied messages from "du" - linux

I am trying to evaluate the disk usage of a number of Unix user accounts.
Simply, I am using the following command:
du -cBM --max-depth=1 | sort -n
But I see many error messages like the one below. How can I exclude all such “Permission denied” messages from the output?
du: `./james/.gnome2': Permission denied
My question is very similar to the following one, with “find” replaced by “du”:
How can I exclude all "permission denied" messages from "find"?
The approach in the following thread does not work for me. I guess I am using bash:
Excluding hidden files from du command output with --exclude, grep -v or sed

du -cBM --max-depth=1 2>/dev/null | sort -n
or, better, in bash (this filters out only this particular error, rather than all of stderr like the previous snippet):
du -cBM --max-depth=1 2> >(grep -v 'Permission denied') | sort -n
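To convince yourself that only the matching stderr lines are dropped, you can feed the same construct some fake output instead of running du (a quick bash demo; the echo lines stand in for real du messages, and ordering may vary since the process substitution runs asynchronously):
$ { echo 10M; echo 'du: ./x: Permission denied' >&2; echo 'du: ./y: Input/output error' >&2; } 2> >(grep -v 'Permission denied')
10M
du: ./y: Input/output error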

Note that 2>/dev/null only hides the error messages. du still tries to run over every directory: it has to check whether it has permission to read each one, and if not, it moves on to the next. Imagine you have thousands of dirs; the failed attempts still happen, they are just no longer displayed.
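You can reproduce this with a deliberately unreadable directory (a hypothetical demo; run it as a non-root user, since root ignores the mode bits):
$ mkdir -p demo/secret && chmod 000 demo/secret
$ du -s demo              # prints the total plus a Permission denied error
$ du -s demo 2>/dev/null  # prints only the total; du still tried to enter demo/secret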

I'd use something concise that excludes only the lines you don't want to see: redirect stderr to stdout, and use grep -v to drop all the “denied” lines:
du -cBM --max-depth=1 2>&1 | grep -v 'denied' | sort -n

To remove all errors coming from the du command, I used this:
du -sh 2>&1 | grep -v '^du:'

If 2>/dev/null does not work, probably the shell you are using is not bash.
To check what shell you are using, you may try ps -p $$ (see https://askubuntu.com/a/590903/130162 )
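For example, under bash the output looks something like this (the PID and TTY values here are made up):
$ ps -p $$
  PID TTY          TIME CMD
 4321 pts/0    00:00:00 bash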

You can redirect the output to a temporary file, like so:
du ... > temp_file
Errors get printed on the terminal, and only the disk-usage information goes into the temp_file.
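With the command from the question, that would look something like this (temp_file is an arbitrary name):
du -cBM --max-depth=1 > temp_file
sort -n temp_file    # only the sizes are in temp_file; the errors went to the terminal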

Related

cat: pid.txt: No such file or directory

I have a problem with cat. I want to write a script that does the same thing as ps -e. In pid.txt I have the PIDs of the running processes:
ls /proc/ | grep -o "[0-9]" | sort -h > pid.txt
Then I want to use $line as part of the path to cmdline for every PID:
cat pid.txt | while read line; do cat /proc/$line/cmdline; done
I tried a for loop too:
for id in 'ls /proc/ | grep -o "[0-9]\+" | sort -h'; do
cat /proc/$id/cmdline;
done
I don't know what I'm doing wrong. Thanks in advance.
I think what you're after is this; there were a few flaws with all of your approaches (or did you really just want to look at processes with single-digit PIDs?):
for pid in $(ls /proc/ | grep -E '^[0-9]+$' | sort -h); do cat /proc/${pid}/cmdline | tr '\x00' '\n'; done
You seem to be in a different current directory when running the cat pid.txt... command than when you ran your ls... command. Run both commands in the same terminal window, or use an absolute path, like /path/to/pid.txt.
Other than that error, you might want to remove -o from your grep command: with the pattern [0-9], it prints every matching digit on its own line, so pid 423 becomes the three lines 4, 2 and 3. @Roadowl also pointed that out already.
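You can see the difference directly:
$ echo 423 | grep -o "[0-9]"
4
2
3
$ echo 423 | grep -o "[0-9]\+"
423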

Regarding the use of xargs in find command

I have one scenario where I need to select all files named aliencoders.numeric-digits,
like aliencoders.1206,
and the find command should search all subdirectories too. If there is no such file, it should not do anything.
I wrote:
find /home/jassi/ -name "aliencoders.[0-9]+" | xargs ls -lrt | awk print '$9'
But it says “no such file or directory” if there is no file starting with aliencoders.xx...
How can I bypass this error? I have to run it over several such directories, and it should give output only for the directories in which such a file pattern exists; otherwise there should be no warning and no xargs run at all.
Currently, if there is no such file, the command ends up operating on the current directory instead of /home/jassi.
If you don't want xargs to execute when the input is empty, you can use -r or --no-run-if-empty, which are GNU extensions, as pointed out in the man page. If you have that support, you can try:
find /home/jassi/ -name "aliencoders.[0-9]+" | xargs -r ls -lrt | awk '{print $9}'
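The difference is easy to demonstrate with empty input (GNU xargs assumed):
$ printf '' | xargs echo listing     # without -r, echo still runs once
listing
$ printf '' | xargs -r echo listing  # with -r, nothing runs at all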
Alternatively, you can make use of the -exec option of find to achieve this, something along these lines:
find /home/jassi/ -name "aliencoders.[0-9]+" -exec ls -lrt {} + | awk '{print $9}'
Hope this helps!
bash:
Try this command (under the bash shell, since most people use it and no shell was specified):
find /home/jassi/ -name "aliencoders.[0-9]+" 2>&1 | xargs ls -lrt | awk '{print $9}'
With 2>&1 you redirect the error messages from stderr to stdout. Once you have this single output stream, you can process it with your pipes, etc.
Without this redirection, the error messages on stderr would keep going to the console, cluttering the output, while only stdout was processed by the pipes.
All about redirection will give you more details about redirection and finer control over it.
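Here is a small illustration of the difference (nosuchfile is a hypothetical name; the exact error text depends on your ls version):
$ ls nosuchfile | wc -l        # the error bypasses the pipe and goes to the terminal
ls: cannot access 'nosuchfile': No such file or directory
0
$ ls nosuchfile 2>&1 | wc -l   # the error travels through the pipe
1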
UPDATE:
tcsh:
Based on your use of tcsh (sometimes I think I'm the only one using it), it is possible to redirect stderr to stdout with the following construct:
command |& ...
so
find /home/jassi/ -name "aliencoders.[0-9]+" |& xargs ls -lrt | awk '{print $9}'
should help. Before, your error messages were bypassing your pipes and going directly to the console; only stdout was being processed by the pipes. With this, all of your output goes through the pipes, and you can filter out what you want.
Also, note the use of the awk command at the end of the command:
awk '{print $9}'
is different from what you posted originally.

xargs can't get user input?

I have some sample code like this:
CMD="svn up blablabla | grep -v .tgz"
echo $CMD | xargs -n -P ${PARALLEL:=20} -- bash -c
The purpose is to run svn update in parallel. However, when it encounters conflicts, which should prompt the user with several options to choose from, it just passes on without waiting for user input, and an error is shown:
Conflict discovered in 'blablabla'.
Select: (p) postpone, (df) diff-full, (e) edit,
(mc) mine-conflict, (tc) theirs-conflict,
(s) show all options: svn: Can't read stdin: End of file found
Is there any way to fix this?
Thanks
Yes, there is a way to fix this! See the answer to how to prompt a user from a script run with xargs. Long story short, use
xargs -a FILENAME your_script
or
xargs -a <(cat FILENAME) your_script
The first version actually reads lines from a file, and the second one fakes reading lines from a file, which is convenient for using xargs in pipe chains with awk or perl. Remember to use the -0 flag if you don't want to break on whitespace!
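As a minimal sketch (targets.txt and interactive_script.sh are hypothetical names, and -a is a GNU xargs option), the script keeps the terminal as its stdin because xargs takes its arguments from the file instead:
$ printf '%s\n' branch1 branch2 > targets.txt
$ xargs -a targets.txt -n 1 ./interactive_script.sh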
Another solution, which doesn't rely on Bash but on GNU's flavor of xargs, is to use the -o or --open-tty option:
echo $CMD | xargs -n -P ${PARALLEL:=20} --open-tty -- bash -c
From the manpage:
-o, --open-tty
       Reopen stdin as /dev/tty in the child process before executing the command. This is useful if you want xargs to run an interactive application.

linux command grep -is "abc" filename|wc -l

What does the -s option mean there, and what is the pipe into wc for? I know it eventually counts the number of times abc appears in the file filename, but I am not sure what the -s option is for, or what the pipe to wc means.
linux command grep -is "abc" filename|wc -l
output
47
-s means "suppress error messages about unreadable files" and the pipe to wc means "take the output and send it to the wc -l command" which effectively counts the number of lines matched. You can accomplish the same with the -c option to grep: grep -isc "abc" filename
Consider:
command_1 | command_2
The role of the pipe is to take the output of the command written before it (command_1 here) and supply that output to the command written after it (command_2 here).
The man page has everything you would want to know about the options for grep:
-s, --no-messages
       Suppress error messages about nonexistent or unreadable files.
       Portability note: unlike GNU grep, traditional grep did not conform to POSIX.2, because traditional grep lacked a -q option and its -s option behaved like GNU grep's -q option. Shell scripts intended to be portable to traditional grep should avoid both -q and -s and should redirect output to /dev/null instead.
The pipe to wc -l is what gives you the count of how many lines the string "abc" appeared on. It isn't necessarily the number of times the string appeared in the file since one line with multiple occurrences is going to be counted as only 1.
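A small demonstration of that distinction, using a three-line file where "abc" occurs three times but on only two lines:
$ printf 'abc\nabc def abc\nxyz\n' > filename
$ grep -is "abc" filename | wc -l
2
$ grep -isc "abc" filename
2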
grep man page says:
-s, --no-messages suppress error messages
grep returns the lines that have abc (case insensitive) in them. You pipe them to wc to get a count of the number of lines.
From man grep:
-s, --no-messages
Suppress error messages about nonexistent or unreadable files.
The wc command counts lines, words, and characters. With -l it returns only the number of lines.

Identify the files opened by a particular process on Linux

I need a script to identify the files opened by a particular process on Linux.
To count the file descriptors:
>cd /proc/<PID>/fd; ls | wc -l
I expect to see a list of numbers, which is the list of file descriptor numbers in use by the process. Please show me how to see all the files in use by that process.
Thanks.
The command you probably want to use is lsof. This is a better idea than digging in /proc, since the command is a clearer and more stable way to get system information:
lsof -p pid
However, if you're interested in the /proc approach, note that each file /proc/<pid>/fd/x is a symlink to the file it's associated with. You can read the symlink's target with the readlink command. For example, this shows what the terminal's stdin is bound to:
$ readlink /proc/self/fd/0
/dev/pts/43
or, to get all files for some process,
ls /proc/<pid>/fd/* | xargs -L 1 readlink
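If you prefer not to parse ls output, a plain loop does the same thing (here pid is a shell variable holding the process ID):
for fd in /proc/$pid/fd/*; do readlink "$fd"; done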
While lsof is nice, you can just do:
ls -l /proc/pidoftheproces/fd
lsof -p <pid number here> | wc -l
If you don't have lsof, you can do roughly the same using just /proc, e.g.:
$ pid=1825
$ ls -1 /proc/$pid/fd/*
awk '!/\[/&&$6{_[$6]++}END{for(i in _)print i}' /proc/$pid/maps
(The awk one-liner prints the unique pathnames, column 6 of maps, that are mapped into the process, skipping pseudo-entries like [heap] and [stack].)
You need lsof. To get the PID of the application which opened foo.txt:
lsof | grep foo.txt | awk -F\ '{print $2}'
or, to do the opposite (list the files opened by a process), what Macmede said:
lsof | grep processName
