Linux - Get files/directories listing and the username who owns them

I am creating a report on the usage of a shared Linux NFS mount point and sending it via email. I am using the du Linux command to get directory sizes, but I also need information about who owns these files. Is there a way to get output like this:
SIZE[MB]  FILENAME  OWNER
631746    /logs1    user1
372477    /logs2    user2
372477    /data3    user1
191846    /data2    user2
Thank You

That's not possible with du alone; try writing a custom bash script.

Look into combining your command with ls -l, which gives user/group and other information, then pipe it through sed or cut, for example, to extract the fields you want.

Something along the lines of
for i in ./*    # or however you get your filenames
do
    printf '%s\t%s\n' "$(du -hs "$i")" "$(ls -ld "$i" | awk '{print $3}')"
done
will do it, though getting the columns nicely aligned in most/all cases would require some more work.
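If you want the exact SIZE/FILENAME/OWNER layout from the question, here is a minimal sketch that asks stat for the owner instead of parsing ls (assuming GNU coreutils, where stat -c %U prints the owner's username; /logs* /data* are just the example paths from the question):
# du -sm prints "SIZE<TAB>PATH", one line per argument, size in MB
du -sm /logs* /data* | while IFS=$'\t' read -r size path
do
    printf '%s\t%s\t%s\n' "$size" "$path" "$(stat -c %U "$path")"
done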

Related

Awk script that takes the path to a directory as input and displays all files whose size exceeds a limit?

I'm fairly new to Linux and awk. I want to display all files whose size is more than a limit (e.g. 3 KB), where the files are found within a directory whose path is specified by the user.
I managed to do it by "hard-coding" the path in the terminal like this:
ls -l /home/user/Documents | ./testScript
testScript contains:
#!/bin/bash -f
awk '
# $5 is the size column and $9 the file name column of "ls -l" output
$5 > 3000 {
    print $9
}
'
How do I do this with the user specifying a directory path?
It would be easier to use find than a combination of ls and a script:
find PATH_TO_DIRECTORY -size +10k
You can make it a bash function that takes a parameter, as sketched below.
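For example, a minimal sketch (largefiles is a hypothetical name; adjust the size limit to your needs):
largefiles() {
    # list files larger than 3 KB under the directory given as $1
    find "$1" -size +3k
}
largefiles /home/user/Documents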
Posting an answer as I can't comment for lack of reputation points:
Not sure what you mean by "path specified by user", but assuming you can read the path into some variable, then just do this in your code:
ls -l "$mypath" | ./testScript
altagir's find is a better solution, but for cases where someone wants to use this general structure and doesn't know a way other than ls to do it:
stat -c "%s %n" "$someDir"/* | awk -v max="$maxval" '$1 > max { print $2 }'
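For example, with hypothetical values matching the original question (files over 3 KB under the user's Documents directory):
someDir=/home/user/Documents
maxval=3000
stat -c "%s %n" "$someDir"/* | awk -v max="$maxval" '$1 > max { print $2 }'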

How can I search for a specific string after an ls command?

I am trying to make a shell script that searches the /proc directory and finds the user of a process. How am I supposed to do that? I am trying different ways with grep, like
NUM=`egrep -ri "$a1" /proc/ 2> error.txt `
echo "$NUM"
but nothing happens. Can someone help?
grep searches the contents of files. To search for files owned by someone, try something like
find /proc -user someone -ls
Trivially, you could pipe ls through a filter to obtain only those lines where the third field is someone, as hinted at in the comments:
ls -l | awk '$3=="someone"'
... but for various reasons you should avoid parsing the output of ls in general.
If you know the numeric user ID, you could do something like
awk '$1=="Uid" && $2==1234 { print FILENAME }' /proc[1-9]*/status
to obtain the file names of the processes owned by UID 1234. Something very similar could be done with find, as in the first example. (Maybe restrict the search to numeric directory names only?)
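For instance, a sketch with GNU find that restricts the search to the numeric (PID) directories:
find /proc -maxdepth 1 -regextype posix-egrep -regex '/proc/[0-9]+' -user someone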
To find the userid of a process via the /proc pseudo-filesystem, grab the "Uid" line from the "status" file:
uid_line=`grep -e ^Uid /proc/1/status`
You can parse that apart into real, effective, saved set, and filesystem uids. The second field is the real uid:
real_uid=`echo "$uid_line"|cut -f 2`
You can also convert uid into username using getent:
username=`getent passwd $real_uid|cut -d: -f 1`
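Putting those pieces together as a small helper function (proc_owner is a hypothetical name; it takes a PID as its argument):
proc_owner() {
    # the real UID is the second tab-separated field of the Uid: line
    real_uid=$(grep -e ^Uid "/proc/$1/status" | cut -f 2)
    # map the numeric UID back to a username
    getent passwd "$real_uid" | cut -d: -f 1
}
proc_owner 1    # typically prints "root"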
I am pretty sure user IDs for processes are in /proc/PID/loginuid (note that this is the audit login UID, which can differ from the process's current owner).

Why can't I run my shell script to list users?

users='awk '{print $1}' /etc/passwd | sort -u'
for user in $users
do
echo " - $user"
done
This is my shell script. The problem is that it shows an error.
The error is: users: command not found
Please give me the solution, friends.
With the code the way it is now, I see that you're not assigning the output of the awk|sort command to the variable (maybe you wanted to use backticks ` instead of single quotes '?).
This works:
#!/bin/bash
users=$(awk '{print $1}' /etc/passwd | sort -u)
for user in $users
do
echo " - $user"
done
Although you should be aware that /etc/passwd is not separated by spaces, so awk '{print $1}' won't give you the user's name (which may be what you wanted).
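Since /etc/passwd is colon-separated, tell awk to split on colons (or use cut) to get real user names:
users=$(awk -F: '{print $1}' /etc/passwd | sort -u)
# or equivalently:
users=$(cut -d: -f1 /etc/passwd | sort -u)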
Edit:
As per @Andy Lester's comment on your question: if you save this code in a file (let's say /tmp/myscript.bash), then to run it you have to type in a terminal:
/bin/bash /tmp/myscript.bash
or, since it starts with #!/bin/bash (read here), you could make it executable (using chmod u+x /tmp/myscript.bash) and then call it by just typing /tmp/myscript.bash. You can also save it in one of the PATH directories (type echo $PATH to see what they are), make it executable, and then you'll be able to call it from anywhere, but I don't really recommend doing that, because you may end up shadowing system commands if you're not careful. For instance, say you call your script by the unfortunate name ls and save it in the first directory of the $PATH (in my case, /usr/local/sbin). Every time you type ls, you won't be listing directories but calling your script... which is bad.

shell script to download latest file from FTP

I am writing a shell script for the first time, and I want to download the latest created file from FTP.
I want to download the latest file from a specific folder. Below is my code for that, but it downloads all the files in the folder, not just the latest one.
ftp -in ftp.abc.com << SCRIPTEND
user xyz xyz
binary
cd Rpts/
mget ls -t -r | tail -n 1
quit
SCRIPTEND
Help me with this, please?
Try using the wget or lftp utility instead; lftp compares file times/dates and, AFAIR, FTP scripting is its purpose. Switch to ssh/rsync if possible; you can read a bit about using lftp in place of rsync here:
https://serverfault.com/questions/24622/how-to-use-rsync-over-ftp
Probably the easiest way is to link the latest version on the server side to "current" and always fetch the file that link points to. If you're not the admin of the server, you need to list all files with date/time, grab that information, parse it, and decide which one is newest; in the meantime the state on the server can change, and you end up with a more complicated solution than it's worth.
The point is that "ls" sorts its output in some way, and time may not be the default. There are switches to sort it, e.g. based on modification time; however, even when the server responds OK to ls -t, you can't be sure it really supports sorting. It can just ignore all switches and always return the same list, which is why admins usually provide a "current" link (ln -s). If there's no "current", then to make sure you have the right file you need to parse the listing anyway (ls -al).
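If you do administer the server, the "current" link is easy to refresh from cron. A minimal sketch, assuming the reports live in a hypothetical /srv/ftp/Rpts directory:
# keep a stable "current" name pointing at the newest report
cd /srv/ftp/Rpts &&
ln -sf "$(ls -t | grep -vx current | head -n 1)" current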
http://www.catb.org/esr/writings/unix-koans/shell-tools.html
Looking at the code, the line
mget ls -t -r | tail -n 1
doesn't do what you think: ftp does not run ls -t -r | tail -n 1 as a shell pipeline, so mget never receives the name of the newest file. You could replace this line with
mget $(ls -t -r | tail -n 1)
but note that inside an unquoted here-document the $( ... ) is expanded by your local shell before ftp ever sees it, so it would list your local directory rather than the remote one. A more reliable two-step approach is sketched below.
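A hedged two-step sketch of that idea: fetch the listing first, pick the newest name on the local side, then download just that file. It still assumes the server honors the sort flags (see the caveat above) and that filenames contain no spaces:
# Step 1: list the remote directory; with -tr honored, the newest
# file is on the last line, and its name is the last field.
newest=$(ftp -in ftp.abc.com <<'END' | tail -n 1 | awk '{print $NF}'
user xyz xyz
cd Rpts/
ls -tr
END
)

# Step 2: fetch only that file; this here-document is deliberately
# unquoted so the local shell substitutes $newest before ftp runs.
ftp -in ftp.abc.com <<END
user xyz xyz
binary
cd Rpts/
get $newest
END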
Try using an FTP client other than ftp. For example, curlftpfs, available at curlftpfs.sourceforge.net, is a good candidate: it lets you mount an FTP server onto a local directory as if it were a local folder and then run different commands on the files there (including find, grep, etc.). Take a look at this article.
This way, since the output comes from a local command, you can be more certain that ls -t returns a properly sorted list.
Btw, it's a bit less convoluted to use ls -t | head -1 than ls -t -r | tail -1. They produce the same result, but why reverse and grab from the tail when you can just grab the head? :)
If you use curlftpfs then your script would be something like this (assuming server ftp.abc.com and user xyz with password xyz).
mkdir /tmp/ftpsession
curlftpfs ftp://xyz:xyz@ftp.abc.com /tmp/ftpsession
cd /tmp/ftpsession/Rpts
cp -Rpf $(ls -t | head -1) /your/destination/folder/or/file
cd -
umount /tmp/ftpsession
My solution is this:
curl 'ftp://server.de/dir/'$(curl 'ftp://server.de/dir/' 2>/dev/null | tail -1 | awk '{print $(NF)}')

How do I list one filename per output line in Linux?

I'm using the ls -a command to get the file names in a directory, but the output comes out on a single line.
Like this:
. .. .bash_history .ssh updater_error_log.txt
I need a built-in alternative that prints each filename on a new line, like this:
.
..
.bash_history
.ssh
updater_error_log.txt
Use the -1 option (note this is the digit "one", not the lowercase letter "L"), like this:
ls -1a
First, though, make sure your ls supports -1. GNU coreutils (installed on standard Linux systems) and Solaris do; but if in doubt, use man ls or ls --help or check the documentation. E.g.:
$ man ls
...
-1 list one file per line. Avoid '\n' with -q or -b
Yes, you can easily make ls output one filename per line:
ls -a | cat
Explanation: The command ls senses if the output is to a terminal or to a file or pipe and adjusts accordingly.
So, if you pipe ls -a to python it should work without any special measures.
ls is designed for human consumption, and you should not parse its output.
In shell scripts, there are a few cases where parsing the output of ls does work and is the simplest way of achieving the desired effect. Since ls might mangle non-ASCII and control characters in file names, these cases are a subset of those that do not require obtaining a file name from ls.
In Python, there is absolutely no reason to invoke ls. Python has all of ls's functionality built in. Use os.listdir to list the contents of a directory and os.stat or os.lstat to obtain file metadata. Other functions in the os module are likely to be relevant to your problem as well.
If you're accessing remote files over ssh, a reasonably robust way of listing file names is through sftp:
echo ls -1 | sftp remote-site:dir
This prints one file name per line, and unlike the ls utility, sftp does not mangle nonprintable characters. You will still not be able to reliably list directories where a file name contains a newline, but that's rarely done (remember this as a potential security issue, not a usability issue).
In Python (beware that shell metacharacters must be escaped in remote_dir):
command_line = "echo ls -1 | sftp " + remote_site + ":" + remote_dir
remote_files = os.popen(command_line).read().split("\n")
For more complex interactions, look up sftp's batch mode in the documentation.
On some systems (Linux, Mac OS X, perhaps some other unices, but definitely not Windows), a different approach is to mount a remote filesystem through ssh with sshfs, and then work locally.
You can use ls -1.
ls -l will also do the job, since it prints each entry on its own line (along with its metadata).
You can also use ls -w1.
This sets the output width to one column, forcing one entry per line.
From manpage of ls:
-w, --width=COLS
set output width to COLS. 0 means no limit
ls | tr ' ' '\n'
(note: this breaks on filenames that contain spaces)
Easy, as long as your filenames don't include newlines:
find . -maxdepth 1
If you're piping this into another command, you should probably prefer to separate your filenames by null bytes, rather than newlines, since null bytes cannot occur in a filename (but newlines may):
find . -maxdepth 1 -print0
Printing that on a terminal will probably display as one line, because null bytes are not normally printed. Some programs may need a specific option to handle null-delimited input, such as sort's -z. Your own script similarly would need to account for this.
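For example, a consumer that handles the null-delimited list safely:
# sort the null-delimited names, then hand them to ls for display
find . -maxdepth 1 -print0 | sort -z | xargs -0 ls -ld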
The -1 switch is the obvious way of doing it, but just to mention another option: use echo with a command substitution inside double quotes, which preserves the whitespace (here, the \n characters):
echo "$(ls)"
Also, how the ls command behaves is described here:
If standard output is a terminal, the output is in columns (sorted
vertically) and control characters are output as question marks;
otherwise, the output is listed one per line and control characters
are output as-is.
Now you see why redirecting or piping makes it list one entry per line.
