This is what I used:
for i in `find some -type f -name '*.class'`
I got:
some/folder/subOne/fileOne.class
some/folder/subOne/fileTwo.class
some/other/sub/file.class
Next, I would like to strip the leading "some/" from each value of $i. What command can I use? Do I HAVE to save them into a file first?
Thanks
$ i=some/other/sub/file.class
$ echo ${i#some/}
other/sub/file.class
Bash has simple string manipulation built in. See also ${i%.class} and the basename and dirname commands.
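Putting that together with the original question, a sketch (run it in a scratch directory; the sample tree is created only for the demonstration, and a while/read loop avoids word-splitting find's output):

```shell
# Set up a sample tree like the one in the question
mkdir -p some/folder/subOne some/other/sub
touch some/folder/subOne/fileOne.class some/other/sub/file.class

# Strip the leading "some/" from each path as it is found
find some -type f -name '*.class' | while IFS= read -r i; do
    echo "${i#some/}"
done
```

No temporary file is needed; the parameter expansion happens on each value as it arrives.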
awk :)
http://en.wikipedia.org/wiki/AWK
EDIT: Oh, and you can pipe commands together, so the output of one command becomes the input of the next. For example, 'cat example.txt | less' displays the file through a pager.
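For example, since the question was about removing a leading "some/", the find output can be piped straight into awk to strip it (a sketch; sed 's|^some/||' would work just as well):

```shell
# Set up a minimal sample tree for the demonstration
mkdir -p some/other/sub
touch some/other/sub/file.class

# awk removes the leading "some/" from each line it receives
find some -type f -name '*.class' | awk '{ sub(/^some\//, ""); print }'
```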
I'm pretty sure I've seen this done before but I can't remember the exact syntax.
Suppose you have a couple of files with different file extensions:
foo.txt
bar.rtf
index.html
and instead of doing something with all of them (cat *), you only want to run a command on 2 of the 3 file extensions.
Can't you do something like this?
cat ${*.txt|*.rtf}
I'm sure there's some find trickery to identify the files first and pipe them to a command, but I think bash supports what I'm talking about without having to do that.
The syntax you want is cat *.{txt,rtf}. A comma is used instead of a pipe.
$ echo foo > foo.txt
$ echo bar > bar.rtf
$ echo "bar txt" > bar.txt
$ echo "test" > index.html
$ cat *.{txt,rtf}
bar txt
foo
bar
$ ls *.{txt,rtf}
bar.rtf bar.txt foo.txt
But as Anthony Geoghegan said in their answer there's a simpler approach you can use.
Shell globbing is much more basic than regular expressions. If you want to cat all the files which have a .txt or .rtf suffix, you'd simply use:
cat *.txt *.rtf
The glob patterns will be expanded to list all the filenames that match the pattern. In your case, the above command would call the cat command with foo.txt and bar.rtf as its arguments.
Here's a simple way to do it using command substitution.
cat $(find . -type f \( -name "*.txt" -o -name "*.rtf" \))
But Anthony Geoghegan's answer is much simpler. I learned from it too.
Hi guys, I'm trying to use sed to delete part of a string (it's a directory). I'm using it like so:
sed -i 's/$1//g' ~/Desktop/RecyclingBin/logs/$1
Whenever I open the text file it appears to be blank. Any help would be appreciated.
Also, if there's an easier way to output a file's location to a text file without the actual filename being in the output, that would make life a lot easier. Currently I'm using:
find $PWD -type d -name "*$1*" >> ~/Desktop/RecyclingBin/logs/$1
Thank you in advance!
You can do this in find itself:
find . -type d -name "*$1*" -exec bash -c 'echo "${1##*/}"' - {} \;
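A simpler variant of the same idea, if all you want is the last path component of each match, is to let basename do the stripping (a sketch; this assumes it runs inside your script, where $1 holds the pattern):

```shell
# Print only the final directory name of each match, not the full path
find . -type d -name "*$1*" -exec basename {} \;
```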
Try my example:
sed -i "s/$1//g" ~/Desktop/RecyclingBin/logs/$1
It works for me.
Always remember: if you want to use a variable in a sed expression, you should use double quotes, as in the example above. Otherwise, for normal substitution, use single quotes.
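A quick demonstration of the difference (a sketch; the pattern and input are made up):

```shell
pattern="hello"
# Double quotes: the shell expands $pattern before sed sees the script
echo "hello world" | sed "s/$pattern//g"   # -> " world"
# Single quotes: sed sees the literal text $pattern, which matches nothing here
echo "hello world" | sed 's/$pattern//g'   # -> "hello world"
```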
I have a shell script which lists all the file names in a directory and stores them in a new file.
The problem is that when I execute this script with the nohup command, it lists the first name four times instead of listing the correct names.
Discussing the problem with other programmers, they think that the problem may be the ls command.
Part of my code is the following:
for i in $( ls -1 ./Datasets/); do
awk '{print $1}' ./genes.txt | head -$num_lineas | tail -1 >> ./aux
let num_lineas=$num_lineas-1
done
Do you know an alternative to ls that works well with nohup?
Thanks.
Don't use ls to feed the loop, use:
for i in ./Datasets/*; do
or if subdirectories are of interest
for i in ./Datasets/*/*; do
Lastly, and more correctly, use find if you need the entire tree below Datasets:
find ./Datasets -type f | while IFS= read -r file; do
(do stuff with $file)
done
Others frown, but there is nothing wrong with also using find as:
for file in $(find ./Datasets -type f); do
(do stuff with $file)
done
Just choose the syntax that most closely meets your needs.
First of all, don't parse ls! A simple glob will suffice. Secondly, your awk | head | tail chain can be simplified by only printing the first column of the line that you're interested in using awk. Thirdly, you can redirect the output of your loop to a file, rather than using >>.
Incorporating all of those changes into your script:
for i in Datasets/*; do
awk -v n="$(( num_lineas-- ))" 'NR==n{print $1}' genes.txt
done > aux
Every time the loop goes round, the value of $num_lineas will decrease by 1.
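To see what that awk one-liner does on its own, here's a sketch with a made-up genes.txt (NR==n selects the n-th line, and $1 is its first column):

```shell
# Build a small sample file: gene name in column 1, extra data in column 2
printf 'g1 x\ng2 y\ng3 z\n' > genes.txt

num_lineas=2
awk -v n="$num_lineas" 'NR==n{print $1}' genes.txt   # -> g2
```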
In terms of your problem with nohup, I would recommend looking into using something like screen, which is known to be a better solution for maintaining a session between logins.
I'm trying to create a really simple bash script, which will list the first few lines of every file in a specific directory. The directory should be specified by the argument.
I think that the Grep command should be used, but I have really no idea how.
My existing script does not seem to work at all, so it's no use putting it in here.
Use the head command:
head -3 /path/to/dir/*
For any answer using head and *, redirect stderr to /dev/null unless you want to see errors like:
head: error reading ‘tmp’: Is a directory
for file in dir/*; do
echo "-- $file --"
head "$file"
echo
done
If you want the first few lines of all files ending in .txt, try
head *.txt
or
head --lines=3 *.txt
Because bash does filename expansion (globbing) by default, you can just let your shell expand input and let head do the rest:
head *
The * wildcard expands to all the filenames in the working directory. In zsh you can see this nicely: it expands the glob right on the command line when you press Tab.
You can change the amount of lines with the -n argument to head.
If you want to do this recursively:
find . \! -type d -exec head '{}' +
Assuming:
the path of file f is ~/f,
"which f" outputs "~/f",
then which f | cat shows ~/f. So cat here is applied to the text "~/f", which is different from cat ~/f.
My question is: how could I compose which and cat into one command to achieve the result of cat ~/f? When I don't know the result of which f in advance, such a composition would be very convenient. Currently, if I don't know the result of which f in advance, I have to invoke which f first and copy-paste the result to feed less.
A related question is: how can I assign the result of which f to a variable?
Thanks a lot!
Try:
cat `which ~/f`
For the related question:
foo=`which ~/f`
echo $foo
cat "`which f`"
Like so in bash:
cat "$(which f)"
var="$(which f)"
What you want is:
cat `which f`
In which f | cat the cat program gets the output of which f on standard input. It then just passes that standard input through, so the result is the same as a plain which f. In the call cat ~/f the filename is passed as a parameter to the command. cat then opens the file ~/f and displays its contents.
To get the output of which f as a parameter to cat you can, as others have answered, use backticks or $():
cat `which f`
cat $(which f)
Here the shell takes the output of which f and inserts it as a parameter for cat.
In bash, you can use:
cat "$(which f)"
to output the contents of the f that which finds. This, like the backtick solution, takes the output of the command within $(...) and uses that as a parameter to the cat command.
I prefer the $(...) to the backtick method since the former can be nested in more complex situations.
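For instance, $(...) nests cleanly where backticks get awkward (a sketch using ls as a stand-in for f, since it's reliably on the PATH):

```shell
# One level: the directory that the found executable lives in
dirname "$(which ls)"

# Nested: use that result directly as an argument to another command
ls "$(dirname "$(which ls)")"
```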
Assigning the output of which to a variable is done similarly:
full_f="$(which f)"
In both cases, it's better to use the quotes in case f, or its path, contains spaces, as heinous as that crime is :-)
I've often used a similar trick when I want to edit a small group of files with similar names under a given sub-directory:
vim $(find . -type f -name Makefile)
which will give me a single vim session for all the makefiles (obviously, if there were a large number, I'd be using sed or perl to modify them en masse instead of vim).
cat echos the contents of files to the standard output. When you write stuff | cat, the file cat works on is the standard input, which is connected to the output of stuff (because pipes are files, just like nearly everything else in unix).
There is no quoting going on in the sense that a lisp programmer would use the word.
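The difference is easy to see with a file whose contents differ from its name (a sketch; the file is created just for the demonstration):

```shell
f="$(mktemp)"                 # a throwaway file
echo "contents" > "$f"

echo "$f" | cat               # cat reads stdin: prints the file's *name*
cat "$(echo "$f")"            # command substitution: prints "contents"
```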