Why cat command not working in script - linux

I have the following script and it has an error. I am trying to merge all the files into one large file. From the command line the cat command works fine and the content is printed to the redirected file. From the script it works sometimes but not other times. I don't know why it's behaving erratically. Please help.
#!/bin/bash
### For loop starts ###
for D in `find . -type d`
do
combo=`find $D -maxdepth 1 -type f -name "combo.txt"`
cat $combo >> bigcombo.tsv
done
Here is the output of bash -x app.sh
++ find . -type d
+ for D in '`find . -type d`'
++ find . -maxdepth 1 -type f -name combo.txt
+ combo=
+ cat
^C
UPDATE:
The following worked for me. There was an issue with the path. I still don't know what the issue was, so an answer is welcome.
#!/bin/bash
### For loop starts ###
rm -rf bigcombo.tsv
for D in `find . -type d`
do
psi=`find $D -maxdepth 1 -type f -name "*.psi_filtered"`
# This will give us only the directory path from find result i.e. removing filename.
directory=$(dirname "${psi}")
cat "$directory/selectedcombo.txt" >> bigcombo.tsv
done

The obvious problem is that you are attempting to cat a file which doesn't exist. Your bash -x trace shows it plainly: the inner find matched nothing, so combo was empty and cat ran with no arguments at all; it then sat waiting to read standard input, which is why you had to interrupt it with ^C.
Secondary problems are related to efficiency and correctness. Running two nested find commands is best avoided, though splitting the action into two steps is merely inelegant here; the inner find will only ever match one file, at most. Capturing command results into variables is a common beginner antipattern: a variable which is only used once can often be avoided entirely, which keeps the shell's memory free of cruft and coincidentally sidesteps the multiple problems with missing quoting (a variable which contains a file or directory name should basically always be interpolated in double quotes). Finally, redirection is better performed outside any containing loop;
rm file
while something; do
another thing >>file
done
will open, seek to the end of the file, write, and close the file as many times as the loop runs, whereas
while something; do
another thing
done >file
only performs the open, seek, and close actions once, and avoids having to clear the file before starting the loop. In fact, your script can be refactored to not have any loops at all;
find ./*/ -type f -name "*.psi_filtered" -execdir cat selectedcombo.txt \; > bigcombo.tsv
Depending on your problem, it might be an error for there to be directories which contain selectedcombo.txt but which do not contain any *.psi_filtered files. Perhaps you want to locate and examine these directories.
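For example, here is a minimal sketch (assuming the selectedcombo.txt name from your updated script) that prints every directory containing a selectedcombo.txt but no *.psi_filtered file next to it:
find . -name "selectedcombo.txt" -execdir sh -c 'set -- *.psi_filtered; [ -e "$1" ] || pwd' \;
The set -- *.psi_filtered expands the glob inside the directory that -execdir switched to; if nothing matched, $1 is still the literal pattern, the -e test fails, and pwd reports the directory.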

Related

find all files with certain extensions then execute

Why does running this command give me the error message "No such file or directory"?
for i in `find ~/desktop -name '*.py'` ; do ./$i ; done
The complete error message makes it much clearer what the problem is:
bash: .//home/youruser/desktop/foo.py: No such file or directory
You can see that there is indeed no such file:
$ .//home/youruser/desktop/foo.py
bash: .//home/youruser/desktop/foo.py: No such file or directory
$ ls -l .//home/youruser/desktop/foo.py
ls: cannot access './/home/youruser/desktop/foo.py': No such file or directory
Here's instead how you can run a file /home/youruser/desktop/foo.py:
$ /home/youruser/desktop/foo.py
Hello World
So to run it in your loop, you can do:
for i in `find ~/desktop -name '*.py'` ; do $i ; done
Here's a better way of doing the same thing:
find ~/desktop -name '*.py' -exec {} \;
or with a shell loop:
find ~/desktop -name '*.py' -print0 | while IFS= read -d '' -r file; do "$file"; done
For an explanation of what ./ is and does, and why it makes no sense here, see this question
Try find's -exec option. http://man7.org/linux/man-pages/man1/find.1.html
-exec command ;
Execute command; true if 0 status is returned. All following
arguments to find are taken to be arguments to the command
until an argument consisting of `;' is encountered. The
string `{}' is replaced by the current file name being
processed everywhere it occurs in the arguments to the
command, not just in arguments where it is alone, as in some
versions of find. Both of these constructions might need to
be escaped (with a `\') or quoted to protect them from
expansion by the shell. See the EXAMPLES section for examples
of the use of the -exec option. The specified command is run
once for each matched file. The command is executed in the
starting directory. There are unavoidable security problems
surrounding use of the -exec action; you should use the
-execdir option instead.
-exec command {} +
This variant of the -exec action runs the specified command on
the selected files, but the command line is built by appending
each selected file name at the end; the total number of
invocations of the command will be much less than the number
of matched files. The command line is built in much the same
way that xargs builds its command lines. Only one instance of
`{}' is allowed within the command, and (when find is being
invoked from a shell) it should be quoted (for example, '{}')
to protect it from interpretation by shells. The command is
executed in the starting directory. If any invocation with
the `+' form returns a non-zero value as exit status, then
find returns a non-zero exit status. If find encounters an
error, this can sometimes cause an immediate exit, so some
pending commands may not be run at all. This variant of -exec
always returns true.
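As a quick illustration of the difference between the two forms (a sketch; substitute your own path and pattern):
# one command invocation per matched file:
find ~/desktop -name '*.py' -exec echo {} \;
# as many file names as fit, batched onto each invocation:
find ~/desktop -name '*.py' -exec echo {} +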
The paths returned by the find statement will be absolute paths like /home/youruser/desktop/program.py (the shell expands ~ before find ever runs). If you put ./ in front of them, you get paths like .//home/youruser/desktop/program.py, which are resolved relative to the current directory and don't exist.
Replace ./$i with "$i" (the quotes to take care of file names with spaces etc.).
You should use $i and not ./$i
I was doing the same thing at this very moment. I wanted a script to find whether there are any .flac files in the directory and convert them to opus.
Here is my solution:
if test -n "$(find ./ -maxdepth 1 -name '*.flac' -print -quit)"
then
    : # at least one .flac file exists: do the conversion here
else
    : # nothing found: do nothing
fi
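For the conversion step itself, a hedged sketch (it assumes ffmpeg is installed; ffmpeg picks the Opus encoder from the .opus extension):
for f in ./*.flac; do
    [ -e "$f" ] || continue            # glob matched nothing: skip the loop body
    ffmpeg -i "$f" "${f%.flac}.opus"   # transcode each .flac to .opus
done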

find -name with and without xargs give different result

I need to perform some operations on a lot of files in a lot of dirs, say checking whether they are password protected or not.
I've created a bash script (fileproc.sh), and to check that it works I started with something trivial:
#!/bin/sh
echo 'File: ' + $1
Then if I run a simple
find . -name "*.zip" -type f
I have a long list of .zip files, as expected.
If I run
find . -name "*.zip" -type f -print0 | xargs ./fileproc.sh
I have only three files.
What am I doing wrong?
Thanks
xargs is used to run the command with more than one argument; see man xargs. It will append the names printed by find to the command line until a system-dependent limit is reached, then run the command again with the next batch.
$1 is just the first command line argument, you don't see the second and following arguments.
In your script try
echo 'File(s): ' "$@"
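To see the batching in action, and to pair the question's -print0 with the matching xargs -0 (without -0, xargs splits on whitespace and quotes instead of on the NUL bytes that find emits), a sketch:
find . -name "*.zip" -type f -print0 | xargs -0 ./fileproc.sh
with fileproc.sh printing every argument it receives:
#!/bin/sh
# print one line per file name passed on the command line
for f in "$@"; do
    printf 'File: %s\n' "$f"
done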

Find creates a file when I use {}

I'm trying to use find to create a very simple way to append a newline to a file. I know there are tons of other ways to do this, but it bugs the hell out of me that I cannot get this way to work.
So - I'm NOT asking how to add a newline to a file - I'm asking why find is weird.
find . -type f -iname 'file' -exec echo >> {} \;
results in a new file named "{}" that contains the newline, while (to check that find works on my computer):
find . -type f -iname 'file' -exec echo {} \;
prints "./file".
So the >> makes find confused. The question is why and how do I solve that?
I'm asking why find is weird.
It isn't. This has nothing to do with find. In fact, when the file is created, find hasn't even started to run.
>> roughly means "redirect stdout to the end of this file, create a new file when necessary". Note how nothing of this has anything to do with whatever is left of the >>.
Redirection is a feature of the shell, find knows nothing about the redirection and the shell knows nothing about find. >> doesn't magically change its meaning just because you happened to call find. It still means the exact same thing.
If you want to use a shell feature within -exec, you need to run a shell within -exec, passing the file name to it as a positional argument (embedding {} inside the shell script would break on file names containing quotes):
find . -type f -iname 'file' -exec sh -c 'echo >> "$1"' sh {} \;
While the question itself has already been answered, I'd like to point out that you don't strictly need to make find do everything; rather, you can use other available facilities to work together with it, for example:
find . -type f -iname 'file' | while IFS= read -r file; do echo >> "$file"; done
This approach also has the advantage of not executing a new process for every match, which is irrelevant in this case but potentially important if there are thousands of matches and the exec is relatively heavy.
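If file names may contain newlines (which is legal on most filesystems), a null-delimited variant of the same loop (a sketch, assuming GNU find and bash):
find . -type f -iname 'file' -print0 | while IFS= read -r -d '' f; do echo >> "$f"; done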

Using Perl-based rename command with find in Bash

I just stumbled upon Perl today while playing around with Bash scripting. When I tried to remove blank spaces in multiple file names, I found this post, which helped me a lot.
After a lot of struggling, I finally understand the rename and substitution commands and their syntax. I wanted to try to replace all "_(x)" at the end of file names with "x", due to duplicate files. But when I try to do it myself, it just does not seem to work. I have three questions with the following code:
Why is nothing executed when I run it?
I used redirection to show me the success note as an error, so I would know what happened. What did I do wrong there?
After a lot of research, I still do not entirely understand file descriptors and redirection in Bash, nor the syntax of Perl's substitution operator. Can somebody give me a link to a good tutorial?
find -name "*_(*)." -type f | \
rename 's/)././g' && \
find -name "*_(*." -type f | \
rename 's/_(//g' 2>&1
You either need to use xargs or you need to use find's ability to execute commands:
find -name "*_(*)." -type f | xargs rename 's/)././g'
find -name "*_(*." -type f | xargs rename 's/_(//g'
Or:
find -name "*_(*)." -type f -exec rename 's/)././g' {} +
find -name "*_(*." -type f -exec rename 's/_(//g' {} +
In both cases, the file names are added to the command line of rename. As it was, rename would have to read its standard input to discover the file names — and it doesn't.
Does the first find find the files you want? Is the dot at the end of the pattern needed? Do the regexes do what you expect? OK, let's debug some of those too.
You could do it all in one command with a more complex regex:
find . -name "*_(*)" -type f -exec rename 's/_\((\d+)\)$/$1/' {} +
The find pattern is corrected to drop the requirement of a trailing dot. If the _(x) is inserted before the extension, then you'd need "*_(*).*" as the pattern for find (and you'd need to revise the Perl regexes to match).
The Perl substitute needs dissection:
The \( matches an open parenthesis.
The ( starts a capture group.
The \d+ looks for 'one or more digits'.
The ) stops the capture group. It is the first and only, so it is given the number 1.
The \) matches a close parenthesis.
The $ matches the end of the file name.
The $1 in the replacement puts the value of capture group 1 into the replacement text.
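When experimenting with substitutions like this, the Perl rename's -n (no-act) option prints the renames it would perform without touching anything; for example, a sketch of the same command:
find . -name "*_(*)" -type f -exec rename -n 's/_\((\d+)\)$/$1/' {} +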
In your code, the 2>&1 sent the error messages from the second rename command to standard output instead of standard error. That really doesn't help much here.
You need two separate tutorials; you are not going to find one tutorial that covers I/O redirection in Bash and regular expressions in Perl.
The 'official' Perl regular expression tutorial is:
perlretut, also available as perldoc perlretut on your machine.
The Bash manual covers I/O redirection, but it is somewhat terse:
I/O Redirections.

unix bash - save environment variable and loop

Let's say you have a first.sh file in a directory: "/home/userbob/scripts/foo/". Basically I would like to know how to loop through specific directories, each time going back up to a higher level directory and repeating.
The .sh file has something like this pseudocode:
#!/bin/bash
curdi={$PATH} #where the first.sh file sits on the server
FOLDERS="$curdi/waffles/inner/
$curdi/pancakes/inner/
$curdi/bagels/inner/"
for f in $FOLDERS
do
cd $f
cp innerofinner/* .
cd $curdi
done
The idea is to somehow copy all the contents of /home/userbob/scripts/foo/waffles/inner/innerofinner to /home/userbob/scripts/foo/waffles/inner/
(and basically repeating, just with the path having pancakes, bagels, etc.)
Can't do it for all directories (*) under /home/userbob/scripts/foo/ because there are some that I don't want to copy.
This should do it:
for name in waffles pancakes bagels
do
cp "$curdi/$name/inner/innferofinner/"* "$curdi/waffles/inner"
done
Walking file trees? Sounds like a job for find!
#!/usr/bin/env bash
# only environment variables should be all-caps
dirs=({bagels,pancakes}/inner)
find "${dirs[#]}" -type d -maxdepth 1 -mindepth 1 -name innerofinner -execdir bash -c 'cp "$1"/* .' -- {} \;
I did a partial path and assumed a working directory of /home/userbob/scripts/foo. An absolute path would work, too, and would look like
dirs=(/home/userbob/scripts/foo/{bagels,pancakes}/inner)
This finds all directories exactly one level below the listed directories that are named "innerofinner" and, in their parent directories, executes bash with a simple cp script.
If you're wondering how this works, read below.
The dirs=() syntax creates an empty array named dirs. dirs=(a b) creates an array with a at index 0 and b at index 1. Any whitespace-delimited strings will work here. In a shell script {a,b,c} expands to a b c, but A{a,b,c}B expands to AaB AbB AcB. So specifying {bagels,pancakes}/inner is just a way to say both bagels/inner and pancakes/inner without having to type as much.
A variable in bash can be expanded with $foo or with ${foo}; these are the same. An array in shell can be expanded to all of its elements with ${foo[@]}, delimited by spaces (if you know perl or php this will make some sense), and quoting the expansion (always a good idea in shell!) prevents spaces inside the elements from being processed again by the shell. Thus, "${dirs[@]}" becomes bagels/inner pancakes/inner.
Knowing this, we see that the find command has become find bagels/inner pancakes/inner -mindepth 1 -maxdepth 1 -type d -name innerofinner, and if you execute it you will get exactly two lines: the full path to each innerofinner directory. All we want now is to do something with each one, which -execdir does nicely.
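A quick interactive check of the array and brace-expansion pieces (a sketch):
$ dirs=({bagels,pancakes}/inner)
$ printf '%s\n' "${dirs[@]}"
bagels/inner
pancakes/inner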
Use a recursive function or invoke the script recursively.
I am not sure if I understand your problem statement correctly. Your pseudocode seems good, but I see a problem with the following line.
curdi={$PWD}
It does not give you the directory where the script resides but gives the directory you are in. If your script directory is in the path and you are running the script from your home directory then $curdi would point to your home directory and not the directory where your script resides. This will lead to undesired results.
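If what you actually want is the directory the script itself lives in, a common sketch (good enough for ordinary invocations, though not bulletproof against symlinks and other edge cases):
script_dir=$(cd "$(dirname "$0")" && pwd)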
Incidentally, if you really wanted to do it in the way that your pseudo-script attempts it, you'd do it like this
#!/usr/bin/env bash
for f in "$PWD"/{waffles,pancakes,bagels}/inner ; do
cd "$f"
cp innerofinner/* .
# if you know for sure that it's one level up
cd ..
done
Presuming that $PWD is a good enough indicator of "current" directory for you. Me, I'd pass it in to the script.
#!/usr/bin/env bash
base="${1-$PWD}"
for f in "$base"/{waffles,pancakes,bagels}/inner ; do
cd "$f"
cp innerofinner/* .
cd ..
done
and call it like
breakfast.sh /home/userbob/scripts/foo/
find . \( -ipath '*waffles*innerofinner*' -o \
          -ipath '*pancakes*innerofinner*' -o \
          -ipath '*bagels*innerofinner*' \) \
       -type f \
       -exec sh -c 'cp "$1" "$(dirname "$(dirname "$1")")"' sh {} \;
should do it. It finds every file under the desired subdirs (-ipath matches the whole path, where -iname would only match the file name, and a backtick substitution would have run once before find started rather than once per file), then copies each file two directory levels up, from .../inner/innerofinner/ into .../inner/.
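To see why the two nested dirname calls land in the right place (a sketch with a hypothetical path):
$ dirname waffles/inner/innerofinner/file.txt
waffles/inner/innerofinner
$ dirname "$(dirname waffles/inner/innerofinner/file.txt)"
waffles/inner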
HTH
