problems with source command in shell - linux

Good afternoon. I have the following command to run my 'code.sh' file, to which I pass a parameter '$1'. The problem is that I want to run 'code.sh' with 'source'. This is my command:
find . -name "*.txt" -type f -exec ./code.sh {} \;
But what I actually need is to run it with:
source ./code.sh

This is tricky. When you source a script you need to do it in the current shell, not in a sub-shell or child process. Executing source from find won't work because find is a child process, and so changes to environment variables will be lost.
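A quick way to see the problem for yourself (MYVAR is a made-up name, purely for illustration):
export MYVAR=old
bash -c 'MYVAR=new; echo "child sees: $MYVAR"'   # the child prints "new"...
echo "parent still sees: $MYVAR"                 # ...but the parent still prints "old"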
It's rather roundabout, but you can use a loop to parse find's output and run the source commands directly in the top-level shell (using process substitution).
while read -r -d $'\0' fileName; do
    source code.sh "$fileName"
done < <(find . -name "*.txt" -type f -print0)
Now what's with -print0 and -d $'\0', you ask? Using these two flags together is a way of making the script extra safe.† File names in UNIX are allowed to contain lots of oddball characters including spaces, tabs, and even newlines. While newlines are rare, they are indeed legal.
-print0 tells find to use NUL characters (\0) to separate the file names rather than the default newlines (\n). Doing this means file names containing \n won't mess up the loop. Using \0 as a separator works well because \0 is not a legal character in file names.
-d $'\0'‡ does the same thing with read on the other side. It tells read that lines are delimited with \0 instead of \n.
† You may have seen this trick before. It's common to write find ... -print0 | xargs -0 ... to get the same sort of safety when pairing find with xargs.
‡ If you're wondering about $'...': that's Bash ANSI-C quoting syntax for writing string literals containing escape codes. Dollar sign plus single quotes. You can write $'\n' for a newline or $'\t' for a tab or $'\0' for a NUL.
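A throwaway demonstration of the syntax:
echo $'first\tsecond'   # a real tab is printed between the words
printf '%q\n' $'\t'     # bash's %q prints this back as $'\t', confirming it's a literal tab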

You won't be able to use find in this way; it will always execute a command in a separate process, not the current shell. If you are using bash 4, there's a simple alternative to using find:
shopt -s globstar
for f in **/*.txt; do
    [[ -f $f ]] && source code.sh "$f"
done

Related

Bulk rename files in Unix with current date as suffix

I am trying to bulk rename all the files in the current folder with date suffix:
rename 's/(.*)/$1_$(date +%F)/' *
But that command renames info.txt to info.txt_1000 4 24 27 30 46 113 128 1000date +%F). I want the result to be info.txt_2016-10-13.
You want $1 to be passed literally to rename, yet have $(date +%F) to be expanded by the shell. The latter won't happen when you use single quotes, only with double quotes. The solution is to use double quotes and escape $1 so the shell doesn't expand it.
rename "s/(.*)/\$1_$(date +%F)/" *
Portable POSIX shell solution
Since you said "in Unix" and the rename command isn't portable (it's actually a part of the perl package), here is a solution that should work in more environments:
for file in *; do mv "$file" "${file}_$( date +%F )"; done
This creates a loop and then moves each individual file to the new name. Like the question, it uses date +%F via shell command substitution to insert the "full date" (YYYY-mm-dd). The command substitution must use double quotes (") and not single quotes (') because single quotes shut off shell interpretation (this goes for variables as well), as noted in John's answer.
Argument list too long
Your shell will complain if the directory has too many files in it, with an error like "Argument list too long." This is because * expands to the contents of the directory, all of which become arguments to the command you're running.
To solve that, create a script and then feed it through xargs as follows:
#!/bin/sh
if [ -z "$1" ]; then set *; fi # default to all files in curent directory
for file in "$#"; do mv "$file" "${file}_$( date +%F )"; done
Using ls to generate a file list isn't always wise for scripts (it can do weird things in certain contexts). Use find instead:
find . -type f -not -name '*_20??-??-??' -print0 | xargs -0 sh /path/to/script.sh
Note, that command is recursive (change that by adding -maxdepth 1 after the dot). find is extremely capable. In this example, it finds all files (-type f) that do not match the shell glob *_20??-??-?? (* matches any number of any characters, ? matches exactly one of any character, so this matches abc_2016-10-14 but not abc_2016-10-14-def). This uses find … -print0 and xargs -0 to ensure spacing is properly preserved (instead of spaces, these use the null character as the delimiter).
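You can sanity-check that glob logic in bash itself with a throwaway test:
[[ abc_2016-10-14 == *_20??-??-?? ]] && echo "matches"
[[ abc_2016-10-14-def == *_20??-??-?? ]] || echo "no match"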
You can also do this with perl rename:
find . -type f -not -name '*_20??-??-??' -print0 | xargs -0 \
    rename "s/(.*)/\$1_$( date +%F )/"

Calling commands in bash script with parameters which have embedded spaces (eg filenames)

I am trying to write a bash script which does some processing on music files. Here is the script so far:
#!/bin/bash
SAVEIFS=$IFS
IFS=$(printf "\n\0")
find `pwd` -iname "*.mp3" -o -iname "*.flac" | while read f
do
echo "$f"
$arr=($(f))
exiftool "${arr[#]}"
done
IFS=$SAVEIFS
This fails with:
[johnd:/tmp/tunes] 2 $ ./test.sh
./test.sh: line 9: syntax error near unexpected token `$(f)'
./test.sh: line 9: ` $arr=($(f))'
[johnd:/tmp/tunes] 2 $
I have tried many different incantations, none of which have worked. The bottom line is I'm trying to call a command exiftool, and one of the parameters of that command is a filename which may contain spaces. Above I'm trying to assign the filename $f to an array and pass that array to exiftool, but I'm having trouble with the construction of the array.
Immediate question is, how do I construct this array? But the deeper question is how, from within a bash script, do I call an external command with parameters which may contain spaces?
You actually did have the call-with-possibly-space-containing-arguments syntax right (program "${args[@]}"). There were several problems, though.
Firstly, $(foo) executes a command. If you want a variable's value, use $foo or ${foo}.
Secondly, if you want to append something onto an array, the syntax is array+=(value) (or, if that doesn't work, array=("${array[#]}" value)).
Thirdly, please separate filenames with \0 whenever possible. Newlines are all well and good, but filenames can contain newlines.
Fourthly, read takes the switch -d, which can be used with an empty string '' to specify \0 as the delimiter. This eliminates the need to mess around with IFS.
Fifthly, be careful when piping into while loops - this causes the loop to be executed in a subshell, preventing variable assignments inside it from taking effect outside. There is a way to get around this, however - instead of piping (command | while ... done), use process substitution (while ... done < <(command)); a short demonstration follows after this list.
Sixthly, watch your command substitutions - there's no need to use $(pwd) as an argument to a command when . will do. (Or if you really must have full paths, try quoting the pwd call.)
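To illustrate the fifth point, here is a throwaway counter demonstrating the difference:
count=0
printf 'a\nb\n' | while read -r line; do count=$((count+1)); done
echo "$count"   # prints 0: the loop ran in a subshell, so the increments were lost
count=0
while read -r line; do count=$((count+1)); done < <(printf 'a\nb\n')
echo "$count"   # prints 2: the loop ran in the current shell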
tl;dr
The script, revised:
while read -r -d '' f; do
    echo "$f" # For debugging?
    arr+=("$f")
done < <(find . \( -iname "*.mp3" -o -iname "*.flac" \) -print0)
exiftool "${arr[@]}"
Another way
Leveraging find's full capabilities:
find . \( -iname "*.mp3" -o -iname "*.flac" \) -exec exiftool {} +
# Much shorter!
Edit 1
So you need to save the output of exiftool, manipulate it, then copy stuff? Try this:
while read -r -d '' f; do
    echo "$f" # For debugging?
    arr+=("$f")
done < <(find . \( -iname "*.mp3" -o -iname "*.flac" \) -print0)
# Warning: somewhat misleading syntax highlighting ahead
newfilename="$(exiftool "${arr[@]}")"
newfilename="$(manipulate "$newfilename")"
cp -- "$some_old_filename" "$newfilename"
You probably will need to change that last bit - I've never used exiftool, so I don't know precisely what you're after (or how to do it), but that should be a start.
You can do this just with bash:
shopt -s globstar nullglob
a=( **/*.{mp3,flac} )
exiftool "${a[@]}"
This probably works too: exiftool **/*.{mp3,flac}

Linux command output as a parameter of another command

I would like to pass the output list of elements of a command as a parameter of another command. I have found some other pages:
How to display the output of a Linux command on stdout and also pipe it to another command?
Use output of bash command (with pipe) as a parameter for another command
but they seem to be more complex.
I just would like to copy a file to every result of a call to the Linux find command.
What is wrong here?:
find . -name myFile 2>&1 | cp /home/myuser/myFile $1
Thanks
This is what you want:
find . -name myFile -exec cp /home/myuser/myFile {} ';'
A breakdown / explanation of this:
find: invoking the find command
.: start search from current working directory.
Since no depth flags are specified, this will search recursively for all subfolders
-name myFile: find files with the explicit name myFile
-exec: for the search results, perform additional commands with them
cp /home/myuser/myFile {}: copies /home/myuser/myFile to each result returned by find; think of {} as the placeholder where each search result goes.
';': marks the end of the command that -exec runs; it is quoted so the shell doesn't interpret it. (A dry-run variant of the whole command follows below.)
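Before copying anything for real, you can prepend echo so find prints each cp command instead of running it (a harmless dry run):
find . -name myFile -exec echo cp /home/myuser/myFile {} ';'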
There are a couple of ways to solve this, depending on whether you need to worry about files with spaces or other special characters in their names.
If none of the filenames have spaces or special characters (they consist only of letters, numbers, dashes, and underscores), then the following is a simple solution that will work. You can use $(command) to execute a command, and substitute the results into the arguments of another command. The shell will split the result on spaces, tabs, or newlines, and the for loop will assign each value to $f in turn, running the command on each value.
for f in $(find . -name myFile)
do
    cp something $f
done
If you do have spaces or tabs, you could use find's -exec option. You pass -exec command args, putting {} where you want the filename to be substituted, and ending the arguments with a ;. You need to quote the {} and ; so that the shell doesn't interpret them.
find . -name myFile -exec cp something "{}" \;
Sometimes -exec is not sufficient. For example, in this question, they wanted to use Bash parameter expansion to compute the filename. In order to do that, you need to pass -exec bash -c 'your command', but then you will run into quoting problems with the {} substitution. To solve this, you can use -print0 from find to print the results delimited with null characters (which are invalid in filenames), and pipe it to a while read loop that splits parameters on nulls:
find . -name myFile -print0 | (while read -r -d $'\0' f; do
    cp something "$f"
done)
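As an aside, the -exec bash -c route mentioned above can be made safe by passing the filename as a positional argument instead of splicing {} into the script text. A sketch (the .bak suffix is invented):
find . -name myFile -exec bash -c 'cp -- "$1" "${1}.bak"' _ {} \;
Here _ fills $0 and each filename arrives as $1, so quoting and parameter expansion work normally inside the single-quoted script.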
The pipe will send the output of one program to the input of another. cp does not read from its standard input; it only uses the arguments given on the command line.
You want to either use xargs with the pipe or find's exec argument instead of pipes.
find . -name myFile | xargs -I {} cp /home/myuser/myFile {}
Note: the -I {} option defines {} as the placeholder; you could use a different placeholder if {} conflicts with the command being executed.
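For example, with % as the placeholder instead (purely illustrative):
find . -name myFile | xargs -I % cp /home/myuser/myFile %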

Linux: how to replace all instances of a string with another in all files of a single type

I want to replace for example all instances of "123" with "321" contained within all .txt files in a folder (recursively).
I thought of doing this
sed -i 's/123/321/g' | find . -name \*.txt
but before possibly screwing all my files I would like to ask if it will work.
You have the sed and the find back to front. With GNU sed and the -i option, you could use:
find . -name '*.txt' -type f -exec sed -i s/123/321/g {} +
The find finds files with extension .txt and runs the sed -i command on groups of them (that's the + at the end; it's standard in POSIX 2008, but not all versions of find necessarily support it). In this example substitution, there's no danger of misinterpretation of the s/123/321/g command so I've not enclosed it in quotes. However, for simplicity and general safety, it is probably better to enclose the sed script in single quotes whenever possible.
You could also use xargs (and again using GNU extensions -print0 to find and -0 and -r to xargs):
find . -name '*.txt' -type f -print0 | xargs -0 -r sed -i 's/123/321/g'
The -r means 'do not run if there are no arguments' (so the find doesn't find anything). The -print0 and -0 work in tandem, generating file names ending with the C null byte '\0' instead of a newline, and avoiding misinterpretation of file names containing newlines, blanks and so on.
Note that before running the script on the real data, you can and should test it. Make a dummy directory (I usually call it junk), copy some sample files into the junk directory, change directory into the junk directory, and test your script on those files. Since they're copies, there's no harm done if something goes wrong. And you can simply remove everything in the directory afterwards: rm -fr junk should never cause you anguish.
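A test session along those lines might look like this (names are just examples):
mkdir junk
cp *.txt junk/          # work on copies, not the originals
cd junk
find . -name '*.txt' -type f -exec sed -i 's/123/321/g' {} +
grep -r '321' .         # inspect the results
cd .. && rm -fr junk    # discard the sandbox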

Remove special characters in linux files

I have a lot of files: *.java, *.xml. But a guy wrote some comments and Strings with Spanish characters. I've been searching the web for how to remove them.
I tried find . -type f -exec sed 's/[áíéóúñ]//g' DefaultAuthoritiesPopulator.java just as an example. How can I remove these characters from many other files in subfolders?
If that's what you really want, you can use find, almost as you are using it.
find -type f \( -iname '*.java' -or -iname '*.xml' \) -execdir sed -i 's/[áíéóúñ]//g' '{}' ';'
The differences:
The path . is implicit if no path is supplied.
This command only operates on *.java and *.xml files.
execdir is more secure than exec (read the man page).
-i tells sed to modify the file argument in place. Read the man page to see how to use it to make a backup; a sketch follows after this list.
{} represents a path argument which find will substitute in.
The ; is part of the find syntax for exec/execdir.
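For instance, with GNU sed you can attach a suffix to -i to keep backups (a sketch, not part of the original command):
find -type f \( -iname '*.java' -or -iname '*.xml' \) -execdir sed -i.bak 's/[áíéóúñ]//g' '{}' ';'
Every modified file then has an untouched neighbour ending in .bak.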
You're almost there :)
find . -type f -exec sed -i 's/[áíéóúñ]//g' {} \;
                         ^^                    ^^
From sed(1):
-i[SUFFIX], --in-place[=SUFFIX]
edit files in place (makes backup if extension supplied)
From find(1):
-exec command ;
Execute command; true if 0 status is returned. All
following arguments to find are taken to be arguments to
the command until an argument consisting of `;' is
encountered. The string `{}' is replaced by the current
file name being processed everywhere it occurs in the
arguments to the command, not just in arguments where it
is alone, as in some versions of find. Both of these
constructions might need to be escaped (with a `\') or
quoted to protect them from expansion by the shell. See
the EXAMPLES section for examples of the use of the -exec
option. The specified command is run once for each
matched file. The command is executed in the starting
directory. There are unavoidable security problems
surrounding use of the -exec action; you should use the
-execdir option instead.
tr is the tool for the job:
NAME
tr - translate or delete characters
SYNOPSIS
tr [OPTION]... SET1 [SET2]
DESCRIPTION
Translate, squeeze, and/or delete characters from standard input, writing to standard output.
-c, -C, --complement
use the complement of SET1
-d, --delete
delete characters in SET1, do not translate
-s, --squeeze-repeats
replace each input sequence of a repeated character that is listed in SET1 with a
single occurrence of that character
Piping your input through tr -d 'áíéóúñ' will probably do what you want.
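A minimal sketch (file names invented; note that GNU tr operates on bytes, so double-check the result on UTF-8 encoded sources):
tr -d 'áíéóúñ' < Original.java > Cleaned.java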
Why are you trying to remove only characters with diacritic signs? It's probably worth removing all characters with codes outside the range 0-127, so the removal regexp would be s/[\x80-\xFF]//g (a GNU sed extension) if you're sure that your files shouldn't contain anything beyond plain ASCII.
