The "->" Notation in linux - linux

What does the "->" notation mean in Linux?
For example, when I run ls -l in a particular folder, I get the following:
lrwxrwxrwx 1 root root 29 Feb 27 12:23 ojdbc.jar -> /apps/hadoop/sqoop/ojdbc6.jar
Is the first file a placeholder for the second one?
Kind Regards.

It means the file is not a regular file, but a symbolic link pointing to the file to the right of the arrow.

The command "ls -l" uses "->" to denote a symbolic link (that is, a pseudo-file which only points to another file).
In your example, ojdbc.jar is a symbolic link to /apps/hadoop/sqoop/ojdbc6.jar.
I'm not aware that this meaning holds beyond ls, however.
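As a quick illustration, here is a minimal sketch of creating and inspecting a symbolic link (the file names are taken from the question; the user/group shown is illustrative):

$ ln -s /apps/hadoop/sqoop/ojdbc6.jar ojdbc.jar   # create the link
$ ls -l ojdbc.jar                                 # "->" shows the target
lrwxrwxrwx 1 user user 29 Feb 27 12:23 ojdbc.jar -> /apps/hadoop/sqoop/ojdbc6.jar
$ readlink ojdbc.jar                              # print just the target
/apps/hadoop/sqoop/ojdbc6.jar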

Related

What is the dot in ls -l output on Linux?

I ran ls -l on my CentOS 6.10 system on a specific file.
My question is: what is the meaning of the dot (.) before the 1?
-rw-r--r--. 1 root root 575 Oct 23
It means that the file has an SELinux security context. Use ls -Z to see the actual SELinux context values. It's documented in the info manual for GNU Coreutils, under the node "What information is listed".
The relevant quote:
Following the file mode bits is a single character that specifies
whether an alternate access method such as an access control list
applies to the file. When the character following the file mode
bits is a space, there is no alternate access method. When it is a
printing character, then there is such a method.
GNU ‘ls’ uses a ‘.’ character to indicate a file with a security
context, but no other alternate access method.
A file with any other combination of alternate access methods is
marked with a ‘+’ character.
Follow the link for more details.
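A short sketch on a CentOS-style system (the file name and context values are illustrative):

$ ls -l myfile
-rw-r--r--. 1 root root 575 Oct 23 10:00 myfile
$ ls -Z myfile
-rw-r--r--. root root unconfined_u:object_r:admin_home_t:s0 myfile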

Run a script on a specific file in all subdirs

I've written a script (foo) which makes a simple sed replacement on text in the input file. I have a directory (a) containing a large number of subdirectories (a/b1, a/b2, etc.) which all have the same subdirs (c, etc.) and contain a file with the same name (d). So the rough structure is:
a/
-b1/
--c/
---d
-b2/
--c/
---d
-b3/
--c/
---d
I want to run my script on every file (d) in the tree. Unfortunately the following doesn't work:
sudo sh foo a/*/c/d
How do I use wildcards in a bash command like this? Do I have to use find with specific -maxdepth and -mindepth options, or is there a more elegant solution?
The wildcard expansion in your example should work, and no find should be needed. I assume a, b and c are just generic names used to simplify the question. Do any of your folders/files contain spaces?
If you do:
ls -l a/*/c/d
are you getting the files you need listed? If so, then the problem is how you handle $* in your script file. Mind sharing it with us?
As you can see, wildcard expansion works:
$ ls -l a/*/c/d
-rw-r--r-- 1 user wheel 0 15 Apr 08:05 a/b1/c/d
-rw-r--r-- 1 user wheel 0 15 Apr 08:05 a/b2/c/d
-rw-r--r-- 1 user wheel 0 15 Apr 08:05 a/b3/c/d
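If the script processes one file per run, a simple loop over the glob is one possible sketch (foo is the asker's script; the quoting guards against spaces in names):

for f in a/*/c/d; do
    sh foo "$f"
done

The same selection with GNU find, pinned to the expected depth, could look like:

find a -mindepth 3 -maxdepth 3 -path 'a/*/c/d' -exec sh foo {} \;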

Bash script to rename a heap of folders

I have a directory that looks a little like this:
drw-r--r-- 1 root root 0 Jan 24 17:26 -=1=-directoryname
drw-r--r-- 1 root root 0 Jan 24 17:26 -=2=-directoryname
drw-r--r-- 1 root root 0 Jan 24 17:26 -=3=-directoryname
drw-r--r-- 1 root root 0 Jan 24 17:26 -=4=-directoryname
drw-r--r-- 1 root root 0 Jan 24 17:26 -=5=-directoryname
I am trying to write a script to rename these folders from
-=1=- Folder#1
to strip off the "-=1=-" prefix, but alas I am having no luck.
Can anyone help me find a solution to this?
So far my script below has failed me.
#!/bin/bash
for i in {1..250}
do
rename "-=$i=-" ""*
i=i+1
done
I have used the 1..250 because there are 250 folders.
Given the number, you can manufacture the names and use the mv command:
#!/bin/bash
for i in {1..250}
do
mv "-=$i=- Folder#$i" "Folder#$i"
done
With the Perl-based rename command (sometimes called prename), you could use:
rename 's/-=\d+=- //' -=*=-*Folder#*
or, given the revised question (the information after the pattern isn't fixed):
rename 's/-=\d+=- //' -=*=-*
This worked! Can you please explain how it worked? What's the \d+ for?
The \d is Perl regex notation for a digit 0..9. The + modifier indicates 'one or more'. So, the regex part of s/-=\d+=- // looks for a minus, an equals, one or more digits, an equals, a minus and a space. The replacement part converts all of the matched material into an empty string. The whole expression is surrounded by single quotes so the shell leaves it alone (strictly, only the backslash is a shell metacharacter in that substitute command, but both the backslash and the space would need protecting if you omitted the quotes).
I'm not sure how you'd use the C-based rename command for this job; it is much less powerful than the Perl-based version.
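If the Perl-based rename isn't available, a plain-bash sketch using parameter expansion can do the same job for the names shown in the listing above (no space after the prefix assumed; the -- stops mv from treating the leading "-" as an option):

for d in -=*=-*; do
    mv -- "$d" "${d#-=*=-}"   # strip the shortest leading match of -=...=-
done

Here ${d#pattern} removes the shortest prefix matching the pattern, so -=12=-directoryname becomes directoryname.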

How can I list files with their absolute path, group and user in Linux?

I want to generate recursive file listings with their full information: absolute path, group, user, creation time, etc.
But if I use a find command like find ., I only get the relative path and none of the other details. I would like output like:
-rw-r--r-- 1 root root 669319168 Mar 11 17:10 /root/valhalla-i386-disc2.iso
find has a specific action which does essentially what you want:
-ls True; list current file in ls -dils format on standard output.
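A possible invocation, assuming the listing should start from /root (any absolute starting directory makes the printed paths absolute):

find /root -ls

or, for the current tree with absolute paths:

find "$PWD" -ls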

Linux find on multiple patterns

I need to run find on roughly 1500 file names and was wondering if there is a way to execute multiple find commands simultaneously.
Right now I do something like
for fil in $(cat my_file)
do
find . -name $fil >> outputfile
done
Is there a way to spawn multiple instances of find to speed up the process? Right now it takes about 7 hours to run this loop one file at a time.
Given the 7-hour runtime you mention, I presume the file system has some millions of files in it, so that OS disk buffers loaded during one query are being recycled before the next query begins. You can test this hypothesis by timing the same find a few times, as in the following example.
tini ~ > time find . -name IMG_0772.JPG -ls
25430459 9504 lrwxrwxrwx 1 omg omg 9732338 Aug 1 01:33 ./pix/rainbow/IMG_0772.JPG
20341373 5024 -rwxr-xr-x 1 omg omg 5144339 Apr 22 2009 ./pc/2009-04/IMG_0772.JPG
22678808 2848 -rwxr-xr-x 1 omg omg 2916237 Jul 21 21:03 ./pc/2012-07/IMG_0772.JPG
real 0m15.823s
user 0m0.908s
sys 0m1.608s
tini ~ > time find . -name IMG_0772.JPG -ls
25430459 9504 lrwxrwxrwx 1 omg omg 9732338 Aug 1 01:33 ./pix/rainbow/IMG_0772.JPG
20341373 5024 -rwxr-xr-x 1 omg omg 5144339 Apr 22 2009 ./pc/2009-04/IMG_0772.JPG
22678808 2848 -rwxr-xr-x 1 omg omg 2916237 Jul 21 21:03 ./pc/2012-07/IMG_0772.JPG
real 0m0.715s
user 0m0.340s
sys 0m0.368s
In the example, the second find ran much faster because the OS still had buffers in RAM from the first find. [On my small Linux 3.2.0-32 system, according to top, at the moment 2.5GB of RAM is buffers, 0.3GB is free, and 3.8GB is in use (i.e. about 1.3GB for programs and the OS).]
Anyhow, to speed up processing, you need to find a way to make better use of OS disk buffering. For example, double or quadruple your system memory. As an alternative, try the locate command. The query
time locate IMG_0772.JPG
consistently takes under a second on my system. You may wish to run updatedb just before starting the job that finds the 1500 file names; see man updatedb. If the directory . in your find commands covers only a small part of the overall file system, so that the locate database includes numerous irrelevant files, use the various prune options when you run updatedb to minimize the size of the database that locate has to search; afterwards, run a plain updatedb to restore the other filenames. Using locate you probably can cut the run time to 20 minutes.
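A hedged sketch of the batched lookup with mlocate (my_file is assumed to hold one name per line; -b matches only the base name, and the leading backslash disables mlocate's implicit substring matching so each name must match exactly):

while IFS= read -r name; do
    locate -b "\\$name"
done < my_file > outputfile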
This solution calls find and fgrep only once:
find . | fgrep -f my_file > outputfile
I assume that my_file has a list of files you are looking for, with each name on a separate line.
Explanation
The find command finds all the files (including directories) in the current directory. Its output is a list of files/directories, one per line.
The fgrep command searches the output of the find command, but instead of specifying the search terms on the command line, it reads them from my_file; that's what the -f flag is for.
The output of the fgrep command, which is the list of files you are looking for, is redirected into outputfile.
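One caveat: fgrep -f matches each name anywhere in the path, so short names can produce false positives. A sketch that matches only the basename exactly, assuming GNU find and that no path contains a tab character:

find . -printf '%f\t%p\n' |
    awk -F'\t' 'NR==FNR { want[$0]; next } $1 in want { print $2 }' my_file -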
Maybe something like:
find . \( -name file1 -o -name file2 -o ... \) >outputfile
You could build lines of this kind, depending on the number of names in my_file:
find . \( $(xargs <my_file printf "-name %s -o " | sed 's/-o $//') \) >outputfile
is there a way to spawn multiple instances of find to speed up the process.
This is not how you want to solve the problem, since find is I/O- and FS-limited.
Either use multiple -name arguments grouped together with -o in order to use one find command to look for multiple filenames at once, or find all files once and use a tool such as grep to search the resultant list of files for the filenames of interest.
