How can I extract only an inode number for a file in Linux?

Just a quick question: I want to use a file's inode number in my bash script, and I need some help.
I'm using the command ls -i "filename", which echoes "inode number" "filename". The problem is, I only need the inode number. Is there a way I can "slice" the output?

You can use the stat command with %i to get only inode numbers:
stat -c '%i' *
Most utilities that report inode numbers use the lstat system call, which is the same call used by the stat command.
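Since the goal is to use the number in a bash script, it can be captured into a variable; a minimal sketch ("filename" is a placeholder for the actual file):
inode=$(stat -c '%i' "filename")   # inode number only, no file name
echo "$inode"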

You can also awk over the ls output:
-bash-3.2$ ls -i current.py | awk '{print $1}'

Related

How to get file size in bytes using wc -c

When I use the command wc -c to get the file size in bytes, the command returns two values, the size in bytes and the file name, e.g.:
the output for wc -c my_file is 322 my_file
I need to get only the first value so I can use it in an if condition, and I need to use this specific command, not any other one.
Any help please, thank you.
Redirect stdin and wc won't know or echo the file name:
wc -c < my_file
This can also be done without redirection, and without wastefully reading the whole file, using stat:
stat -c %s my_file
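Either way, the captured value can be used in a condition; a minimal sketch (the 100-byte threshold is arbitrary):
size=$(wc -c < my_file)            # byte count only, since stdin has no name
if [ "$size" -gt 100 ]; then
    echo "my_file is larger than 100 bytes"
fi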

Get size of image in bash

I want to get the size of an image. The image is in a folder and is named encodedImage.jpc
a="$(ls -s encodedImage.jpc | cut -d " " -f 1)"
temp="$(( $a*1024 * 8))"
echo "$a"
The output is not correct. How do I get the size? Thank you
Better than parsing the ls output, the proper way is to use the stat command, like this:
stat -c '%s' file
Check
man stat | less +/'total size, in bytes'
If by size you mean bytes or pretty-printed sizes, you can just use
ls -lh
-h When used with the -l option, use unit suffixes: Byte, Kilobyte, Megabyte, Gigabyte, Terabyte and Petabyte in order to reduce the number of digits to three or less using base 2 for sizes.
I guess the more complete answer, if you're just trying to tear off the file size alone, is the following (I added the file name as well; you can remove ,$9 to drop it):
ls -lh | awk '{print $5,$9}'
You can use this command:
du -sh your_file
(Note that du reports disk usage, which can differ from the file's exact byte size.)
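To tie this back to the question, which seems to want the size in bits (the * 8), a minimal sketch using stat instead of ls -s (ls -s reports allocated blocks, not bytes, which is why the original arithmetic was off):
bytes=$(stat -c '%s' encodedImage.jpc)   # exact size in bytes
bits=$(( bytes * 8 ))
echo "$bits"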

Linux CP using AWK output

I have been trying to learn more about Linux and have spent this morning focusing on the awk command. The command I have been trying to get to work is below.
ls -lRt lpftp.* | awk '{print $7, $9}' | mkdir -p $(awk '{print $1}') | ls -lRt lpftp.* | cp $(awk '{print $9, $7}')
Essentially, I am trying to move each file in a directory into a subdirectory based on that file's last-modified day. The command first prints only the files I want, then uses mkdir to create a folder based on the day of the month each was last modified. What I want to do after that is move each file into its associated directory; however, as the command is now, it moves every file into the 01 folder and prints out the following text
cp: 0653-436 12 is a directory.
Specify -r or -R to copy.
once for every directory.
Does anyone know how I can fix this issue? Or is there a better way to go about it?
ls -lRt lpftp.* | awk '{print $7, $9}' | while read day file ; do mkdir -p "$day"; cp "$file" "$day"; done
The commands between do and done will be executed for each line of output, with the first thing awk prints in the day variable and the second in file (per line). I used quotes here somewhat unnecessarily, as there will not be spaces in the variables given the method by which they are set.
The safest way to do something like this -- and the fastest to execute -- is to use awk on the data to output a shell script. In awk, print the mkdir and cp commands you expect to execute. Pipe the results into head(1) until you're satisfied. Maybe look at the whole thing in less(1). Then execute as follows:
ls -lRg lpftp.* | awk -f script.awk | sh -ex
That will echo the commands to standard error, and stop on the first error. If you're super absolutely sure it's right, drop the x option.
The advantages of this approach over a loop or a bunch of subprocesses in awk (with the system function) are:
you can see what's going to happen, and what's happening
speed of execution
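A minimal sketch of that approach with an inline awk program (script.awk above is a name of your choosing; the field positions $7 for the day and $9 for the file name are carried over from the question's ls -lRt output, so verify them on your system):
ls -lRt lpftp.* | awk '$9 { printf "mkdir -p %s\ncp %s %s\n", $7, $9, $7 }' | head
Once the printed commands look right, swap head for sh -ex to execute them.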

Identify the files opened by a particular process on Linux

I need a script to identify the files opened by a particular process on Linux.
To identify the fds:
$ cd /proc/<PID>/fd; ls | wc -l
I expect to see a list of numbers, which is the list of file descriptor numbers in use by the process. Please show me how to see all the files in use by that process.
Thanks.
The command you probably want to use is lsof. This is a better idea than digging in /proc, since the command is a clearer and more stable way to get system information.
lsof -p pid
However, if you're interested in the /proc approach, you may notice that each file /proc/<pid>/fd/x is a symlink to the file it's associated with. You can read the symlink value with the readlink command. For example, this shows the terminal that stdin is bound to:
$ readlink /proc/self/fd/0
/dev/pts/43
or, to get all files for some process,
ls /proc/<pid>/fd/* | xargs -L 1 readlink
While lsof is nice, you can just do:
ls -l /proc/<pid>/fd
lsof -p <pid number here> | wc -l
If you don't have lsof, you can do roughly the same using just /proc, e.g.:
$ pid=1825
$ ls -1 /proc/$pid/fd/*
$ awk '!/\[/&&$6{_[$6]++}END{for(i in _)print i}' /proc/$pid/maps
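(The awk one-liner prints the unique path names from field 6 of /proc/$pid/maps, skipping anonymous mappings such as [heap] and [stack].)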
You need lsof. To get the PID of the application which opened foo.txt:
lsof | grep foo.txt | awk -F\ '{print $2}'
or, as Macmede said, to do the opposite (list files opened by a process):
lsof | grep processName

Pipe output to use as the search specification for grep on Linux

How do I pipe the output of grep as the search pattern for another grep?
As an example:
grep <Search_term> <file1> | xargs grep <file2>
I want the output of the first grep as the search term for the second grep. The above command is treating the output of the first grep as the file name for the second grep. I tried using the -e option for the second grep, but it does not work either.
You need to use xargs's -i switch:
grep ... | xargs -ifoo grep foo file_in_which_to_search
This takes the option after -i (foo in this case) and replaces every occurrence of it in the command with the output of the first grep.
This is the same as:
grep `grep ...` file_in_which_to_search
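As a later note in this thread points out, some xargs implementations (such as the BSD one on Darwin/macOS) only accept the capital -I spelling, which works the same way; assuming that form:
grep ... | xargs -I {} grep {} file_in_which_to_search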
Try
grep ... | fgrep -f - file1 file2 ...
If using Bash, then you can use backticks:
> grep -e "`grep ... ...`" files
The -e flag and the double quotes are there to ensure that any output from the initial grep that starts with a hyphen isn't then interpreted as an option to the second grep.
Note that the double quoting trick (which also ensures that the output from grep is treated as a single parameter) only works with Bash. It doesn't appear to work with (t)csh.
Note also that backticks are the standard way to get the output from one program into the parameter list of another. Not all programs have a convenient way to read parameters from stdin the way that (f)grep does.
I wanted to search for text in files (using grep) that had a certain pattern in their file names (found using find) in the current directory. I used the following command:
grep -i "pattern1" $(find . -name "pattern2")
Here pattern2 is the pattern in the file names and pattern1 is the pattern searched for
within files matching pattern2.
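A concrete instance of the same idea (the pattern names here are arbitrary illustrations): search for TODO inside every .sh file under the current directory:
grep -i "TODO" $(find . -name "*.sh")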
edit: Not strictly piping but still related and quite useful...
This is what I use to search for a file from a listing:
ls -la | grep 'file-in-which-to-search'
Okay, breaking the rules here, as this isn't an answer, just a note that I can't get any of these solutions to work.
% fgrep -f test file
works fine.
% cat test | fgrep -f - file
fgrep: -: No such file or directory
fails.
% cat test | xargs -ifoo grep foo file
xargs: illegal option -- i
usage: xargs [-0opt] [-E eofstr] [-I replstr [-R replacements]] [-J replstr]
[-L number] [-n number [-x]] [-P maxprocs] [-s size]
[utility [argument ...]]
fails. Note that a capital I is necessary. If I use that, all is good.
% grep "`cat test`" file
kinda works, in that it returns a line for the terms that match, but it also returns a line "grep: line 3 in test: No such file or directory" for each file that doesn't find a match.
Am I missing something, or is this just a difference in my Darwin distribution or bash shell?
I tried this way, and it works great.
[opuser@vjmachine abc]$ cat a
not problem
all
problem
first
not to get
read problem
read not problem
[opuser@vjmachine abc]$ cat b
not problem xxy
problem abcd
read problem werwer
read not problem 98989
123 not problem 345
345 problem tyu
[opuser@vjmachine abc]$ grep -e "`grep problem a`" b --col
not problem xxy
problem abcd
read problem werwer
read not problem 98989
123 not problem 345
345 problem tyu
[opuser@vjmachine abc]$
You should run grep in such a way as to extract file names only; see the parameter -l (a lowercase L):
grep -l someSearch * | xargs grep otherSearch
Because with plain grep, the output is much more than file names only. For instance, when you do
grep someSearch *
You will pipe to xargs info like this
filename1: blablabla someSearch blablabla something else
filename2: bla someSearch bla otherSearch
...
Piping any of the above lines to xargs makes no sense.
But when you do grep -l someSearch *, your output will look like this:
filename1
filename2
Such output can now be passed to xargs.
I have found the following command to work, using $() with my first command inside the parentheses so the shell executes it first.
grep "$(dig +short <hostname>)" file
I use this to look through files for an IP address when I am given a host name.
