How to solve this ambiguous redirect error - linux

I want to run a program (anchor) on all the .fa files in a directory and append the output to each input file (after its original content). For that I have tried:
for f in ./*.fa ; do ./anchor $f -d ./; done >> $f
and it gives the error:
bash: $f: ambiguous redirect
I understand bash is objecting to writing the output back into the input file, but I am recently migrating from Windows, where I do it as:
for %F in ("*.fa") do anchor %F -d ./ >> %%F
which gives me the desired output.
It might seem strange to append output to the input files, but how can I do that in shell?
Thanks
P.S. I also tried using $$ in the output redirection, but it creates a separate output file with a different name, and the original input content is not merged into it.

The most logical way of doing it is to redirect from inside the loop instead, but not directly (thanks 123 for the comment: a file cannot be both input and output at the same time; it might happen to work here, since the Windows loop seems to, but let's not take useless risks):
for f in ./*.fa ; do ./anchor "$f" -d ./ > /tmp/something; cat /tmp/something >> "$f"; done
BTW, about your original code: the >> $f at the end applies to the whole loop and is evaluated once, before the loop body ever runs. At that point f is unset (or holds a stale value), so bash has no usable target for the redirect, which is what "ambiguous redirect" means.
At any rate, it's incorrect.
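A fixed temp-file name can clash between runs; mktemp avoids that. Here is a self-contained sketch of the pattern. The sample .fa file and the stand-in anchor script (which just counts sequences) are demo assumptions; the real anchor is the OP's program.

```shell
# demo stand-ins (the real ./anchor is the OP's program)
dir=$(mktemp -d)
printf '>seq1\nACGT\n' > "$dir/a.fa"
printf '#!/bin/sh\ngrep -c "^>" "$1"\n' > "$dir/anchor"
chmod +x "$dir/anchor"

# the pattern: capture output in a unique temp file, then append it to the input
tmp=$(mktemp)
for f in "$dir"/*.fa ; do
    "$dir/anchor" "$f" -d "$dir" > "$tmp" && cat "$tmp" >> "$f"
done
rm -f "$tmp"
```

Because the temp file is unique per invocation, two runs can't trample each other's intermediate output.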
EDIT: the windows version
for %F in ("*.fa") do anchor %F -d ./ >> %%F
does the redirection inside the loop (unlike your Unix attempt), and it's really surprising that it works, given Windows file locking...
What could happen (not sure) is that Windows doesn't try to open the file for appending until something is actually emitted on standard output, and by that moment the program has already closed it as input.

How to replace text strings (by bulk) after getting the results by using grep

One of my Linux MySQL servers suffered a crash, so I restored a backup. However, this time MySQL runs locally (localhost) instead of remotely (via IP address).
Thanks to Stack Overflow users I found an excellent command to find the IP-address in all .php files in a given directory! The command I am using for this is:
grep -r -l --include="*.php" "100.110.120.130" .
This outputs the necessary files with their locations, of course. If there were fewer than 10 results, I would simply change them by hand. However, I received over 200 hits/results.
So now I want to know if there is a safe command which replaces the IP-address (example: 100.110.120.130) with the text "localhost" instead for all .php files in the given directory (/var/www/vhosts/) recursively.
And maybe, if it's possible and not too much work, also output the changed lines to a file? I don't know if that's even possible.
Maybe someone can provide me with a working solution? To be honest, I don't dare to fool around with this out of the blue. That's why I created a new thread.
The most standard way of replacing a string in multiple files would be to use a tool such as sed. The list of files you've obtained via grep could be read line by line (when output to a file) using a while loop in combination with sed.
$ grep -r -l --include="*.php" "100.110.120.130" . > list.txt
# this will output all matching files to list.txt
Replacing IP in matched files:
while read -r line ; do echo "$line" >> updated.txt ; sed -i 's/100\.110\.120\.130/localhost/g' "${line}" ; done < list.txt
This takes list.txt and reads it line by line, running sed on each listed file to replace all occurrences of the IP with "localhost". Escape the dots as \. in the pattern, since an unescaped . matches any character in a regex. The echo command directly before sed records each filename that will be modified in updated.txt; it isn't strictly necessary, since list.txt contains the exact same filenames, but it can serve as a means of verification.
To do a dry run before modifying all of the matched files remove the
-i from the sed command and it will print the output to stdout
instead of in-place modifying the files.
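The whole pipeline can be exercised end to end. This is a self-contained sketch; the temporary directory and the sample config.php stand in for the OP's /var/www/vhosts tree, and sed -i assumes GNU sed (standard on Linux):

```shell
# demo tree standing in for /var/www/vhosts
dir=$(mktemp -d)
printf '<?php $db_host = "100.110.120.130"; ?>\n' > "$dir/config.php"

# list matching files, then replace in place (dots escaped in both patterns)
grep -rl --include="*.php" "100\.110\.120\.130" "$dir" > "$dir/list.txt"
while read -r f ; do
    sed -i 's/100\.110\.120\.130/localhost/g' "$f"
done < "$dir/list.txt"
```

Running grep again afterwards with the original pattern should return nothing, which doubles as the dry-run verification mentioned above.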

Docker bash'ing with find

I am having a hell of a time attempting to get a bash script to work as expected (as it does in a normal bash session) on a Docker run.
The goal is to replace all of the symlinked files within the official java container with their actual file within the JAVA_HOME directory, so everything is contained within the java directory and not outside of it,
e.g.
$JAVA_HOME/jre/lib/security/java.policy <--- is symlinked to ---> /etc/java-7-openjdk/security/java.policy
The end result should be the file located at: $JAVA_HOME/jre/lib/security/java.policy
The setup:
docker run java:7u91 /bin/bash -cxe "find /usr/lib/jvm/**/jre -type l | while read f; do echo $f; cp --remove-destination $(readlink $f) $f; done;"
I had attempted several different methods of effectively this, with xargs and exec all to no avail.
Any suggestions at this point would be appreciated.
It looks like this is what is happening: $(readlink $f) returns nothing for paths that are not symbolic links, so that expression expands to nothing/empty.
Only $f then produces a value, so the evaluated command becomes cp --remove-destination VALUE_OF_F, with $f as the first argument to cp and no second argument present. That is why the 'destination' is missing.
Also, putting your command inside double quotes like that is a problem in itself: the variables are expanded on the host rather than in the Docker container. Replace the double quotes with single quotes to prevent that from happening.
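The host-vs-child expansion difference is easy to see with any child shell; no Docker is needed. The variable name here is just an illustration:

```shell
demo_var="host-value"
with_double=$(sh -c "echo $demo_var")   # the outer shell expands $demo_var before sh even runs
with_single=$(sh -c 'echo $demo_var')   # the child shell sees a literal $demo_var, which is unset there
```

With single quotes the expansion happens in the child, which in the Docker case means inside the container, where find has actually populated f.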

Extending a script to loop over multiple files and generate output names

I have following script (named vid2gif.sh) to convert a video file to gif:
#! /bin/bash
ffmpeg -i $1 /tmp/gif/out%04d.gif
gifsicle --delay=10 --loop /tmp/gif/*.gif > $2
I can convert a file using command:
vid2gif.sh myvid.mp4 myvid.gif
How can I make it to convert all mp4 files in a folder? That is, how can I make following command work:
vid2gif.sh *.mp4
The script should output files as *.gif. Thanks for your help.
#!/bin/sh
for f; do
    tempdir=$(mktemp -t -d gifdir.XXXXXX)
    ffmpeg -i "$f" "$tempdir/out%04d.gif"
    gifsicle --delay=10 --loop "$tempdir"/*.gif >"${f%.*}.gif"
    rm -rf "$tempdir"
done
Let's go over how this works:
Iteration
for f; do
is equivalent to for f in "$@"; that is to say, it loops over all command-line arguments. If instead you wanted to loop over all MP4s in the current directory, this would be for f in *.mp4; do, or to loop over all MP4s in the directory passed as the first command-line argument, it would be for f in "$1"/*.mp4; do. To support either usage -- defaulting to the current directory if no directory is passed -- it would be for f in "${1:-.}"/*.mp4; do.
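In case the bare for f; form looks unfamiliar, here is a minimal demonstration with simulated arguments (the count and names variables are just for illustration):

```shell
set -- a.mp4 b.mp4      # simulate the script's command-line arguments
count=0
names=""
for f; do               # equivalent to: for f in "$@"; do
    count=$((count + 1))
    names="$names$f "
done
```

The loop body runs once per argument, with each filename intact even if it contains spaces.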
Temporary directory use
Because the original script would reuse /tmp/gif for everything, you'd get files from one input source being used in others. This is best avoided by creating a new temporary directory for each input file, which mktemp will automate.
Creating the .gif name
"${f%.*}" is a parameter expansion which removes everything after the last . in a file; see BashFAQ #100 for documentation on string manipulation in bash in general, including this particular form.
Thus, "${f%.*}.gif" strips the existing extension, and adds a .gif extension.
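A quick sanity check of the expansion (the filename here is illustrative):

```shell
f="videos/myvid.mp4"
base="${f%.*}"        # strips the shortest suffix matching ".*" -> drops ".mp4"
gif="${base}.gif"
```

Because % strips the shortest match from the end, only the final extension is removed; directory components with dots in them are untouched.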

How to execute Linux shell variables within double quotes?

I have the following hacking-challenge, where we don't know, if there is a valid solution.
We have the following server script:
read s # read user input into var s
echo "$s"
# tests if it starts with 'a-f'
echo "$s" > "/home/user/${s}.txt"
We only control the input "$s". Is there a possibility to send OS-commands like uname or do you think "no way"?
I don't see any avenue for executing arbitrary commands. The script quotes $s every time it is referenced, so that limits what you can do.
The only serious attack vector I see is that the echo statement writes to a file name based on $s. Since you control $s, you can cause the script to write to some unexpected locations.
$s could contain a string like bob/important.txt. This script would then overwrite /home/user/bob/important.txt if executed with sufficient permissions. Sorry, Bob!
Or, worse, $s could be bob/../../../etc/passwd. The script would try to write to /home/user/bob/../../../etc/passwd. If the script is running as root... uh oh!
It's important to note that the script can only write to these places if it has the right permissions.
You could embed unusual characters in $s that would cause irregular file names to be created, and careless scripts could then be tripped up by them. For example, if $s were foo -rf . bar, then the file /home/user/foo -rf . bar.txt would be created.
If someone ran for file in /home/user/*; do rm $file; done they'd have a surprise on their hands. With $file unquoted, they would end up running rm /home/user/foo -rf . bar.txt, which is a disaster. Take away /home/user/foo and bar.txt and you're left with rm -rf . : everything in the current directory is deleted. Oops!
(They should have quoted "$file"!)
And there are two other minor things which, while I don't know how to take advantage of them maliciously, do cause the script to behave slightly differently than intended.
read allows backslashes to escape characters like space and newline. You can type a backslash before a space to embed spaces, and end a line with a backslash to make read continue reading the next line of input.
echo accepts a couple of flags. If $s is -n or -e then it won't actually echo $s; rather, it will interpret $s as a command-line flag.
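The echo flag problem is easy to demonstrate, and printf '%s\n' is the usual way to print a string verbatim:

```shell
s="-n"
echoed=$(echo "$s")            # echo consumes -n as a flag, so nothing is printed
printed=$(printf '%s\n' "$s")  # printf prints the string literally
```

This is why defensive scripts prefer printf over echo for any value they don't control.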
Use read -r s, or any \ will be lost/misinterpreted by your command.
read -r -p "Your input: " s
if [ -n "${s}" ]
then
    # "filter" a file name from the command: keep only the first word
    OutPut=$(echo "${s##*/}" | sed 's|^ *\([[:alnum:]_]\{1,\}\)[[:blank:]].*|/home/user/\1.txt|')
    (
        # put any limitation on the user here
        ulimit -t 5 1>/dev/null 2>&1
        ${s}
    ) > "${OutPut}"
else
    echo "Bad command" > /home/user/Error.txt
fi
Sure:
read s
$s > /home/user/"$s".txt
If I enter uname, this prints Linux. But beware: this is a security nightmare. What if someone enters rm -rf $HOME? You'd also have issues with commands containing a slash.
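A sketch of that behavior with the input hard-coded instead of read from the terminal, and writing under /tmp rather than /home/user (both are demo assumptions):

```shell
s="uname"              # stands in for: read s
$s > "/tmp/${s}.txt"   # unquoted $s word-splits and runs the command; output goes to the file
```

The file /tmp/uname.txt then contains whatever the command printed, which is exactly why an attacker-controlled $s in this position is so dangerous.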

Open the lastest downloaded file with bash script

Below is my attempt at this problem. It's a functional script, but I have to specify the application to be used for each file type. Since this information regarding default application must be stored somewhere on Linux / Ubuntu already, how may I access them and incorporate into my script?
Also, can my script be more "elegant" in any way?
Thank you for helping a Bash script beginner! I appreciate any comment.
#!/bin/bash
# Open the latest file in ~/Downloads
filename=$(ls -t ~/Downloads | head -1)
filetype=$(echo -n $filename | tail -c -3)
if [ $filetype == "txt" ]; then
    leafpad ~/Downloads/$filename
elif [ $filetype == "pdf" ]; then
    evince ~/Downloads/$filename
fi
How do I open a file in its default program - Linux should help you with the first part of your question:
xdg-open ~/Downloads/$filename
As mentioned in other answers, it's best not to trust the output of ls in scripts, especially if you have unusual characters like newlines in your filenames. One way to robustly get a list of filenames in a script is with the find command, and null-delimiting them into a pipe.
So to answer your question with a one-liner:
find ~/Downloads -maxdepth 1 -type f -printf "%T@ %p\0" | sort -zrn | { read -d '' ts file; xdg-open "$file"; }
Breaking it down:
The find command lists files in the ~/Downloads directory, but doesn't descend any deeper into subdirectories. The filenames are printed with the given -printf format: a numerical timestamp, a space, the filename, and a null delimiter. Note that find's printf format specifiers are different from those of regular printf.
The sort command numerically sorts (-n) the resulting null-delimited list (-z) by the first field (numerical timestamp). Sort order is reversed (-r) so that the latest entry is displayed first
The read command reads the timestamp and filename of the first file in the list into the ts and file variables. -d '' tells read to use null delimiters.
The file is opened using xdg-open.
Note the read and xdg-open commands are in a curly bracket inline group, so the file variable is in scope for both.
Welcome to bash programming. :-)
First off, I'll refer you to the Bash FAQ. Great resource, lots of tips, perspectives and warnings.
One of them is the classic Parsing LS problem that your script suffers from. The basic idea is that you don't want to trust the output of the ls command, because special characters like spaces and control characters may be represented in a way that doesn't allow you to refer to the file.
You're opening the "last" file, as determined by a sort that the ls command is doing. In order to detect the most recent file without ls, we'll need some extra code. For example:
#!/bin/sh
last=0
for filename in ~/Downloads/*; do
    when=$(stat -c '%Y' "$filename")
    if [ "$when" -gt "$last" ]; then
        last=$when
        to_open="$filename"
    fi
done
xdg-open "$to_open"
The idea is that we'll walk through each file in your Downloads directory and find the one with the largest timestamp using the stat command. Then we open that file using xdg-open, which may already be installed on your system, because it's part of a tool set that a number of other applications depend on.
If you don't have xdg-open, you can probably install it from the xdg-utils package, using whatever package management system your Linux distro provides.
Another possibility is gnome-open, which is part of the Gnome desktop (the libgnome package, to be precise). YMMV. We'd need to know more about your distro and your desktop environment to come up with better advice.
Note that if you do want to continue selecting your application by extension, you might want to consider using a case statement instead of a series of ifs:
...
case "${filename##*.}" in
    txt)
        leafpad "$filename"
        ;;
    pdf)
        xdg-open "$filename"
        ;;
    *)
        echo "ERROR: can't open '$filename'" >&2
        ;;
esac
mimeopen might be useful? There's an explanation of Mime types here.
Also - are your filetype extensions always exactly three letters, as the tail -c -3 implies? If they're of variable length, you may want a regular expression instead.
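For variable-length extensions, a POSIX parameter expansion is actually simpler than a regular expression (the filenames here are illustrative):

```shell
filename="report.pdf"
filetype=${filename##*.}    # strips the longest "*." prefix, leaving the extension
f2="archive.tar.gz"
ext2=${f2##*.}              # longest match means only the final extension remains
```

Unlike tail -c -3, this works for extensions of any length and never eats into the base name.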
As previously mentioned, xdg-open and mimeopen may be useful and more elegant; from their manpages:
xdg-open opens a file or URL in the user's preferred application. If a URL is provided the URL will be opened in the user's preferred web browser. If a file is provided the file will be opened in the preferred application for files of that type.
[mimeopen] tries to determine the mimetype of a file and open it with the default desktop application. If no default application is configured the user is prompted with an "open with" menu in the terminal.
For more elegance in the original script, replace
filetype=$(echo -n $filename | tail -c -3)
with
filetype=${filename: -3}
and instead of the five-lines if/elif/fi structure, consider using two lines as follows.
[ "$filetype" == "txt" ] && leafpad ~/Downloads/"$filename"
[ "$filetype" == "pdf" ] && evince ~/Downloads/"$filename"
