syntax error near unexpected token `<<<' - linux

I have a bash script which runs correctly on my system:
uname -a
Linux debian 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt25-2 (2016-04-08) x86_64 GNU/Linux
But I need it to work in a Redhat 7.2 chroot:
Linux debian 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt25-2 (2016-04-08) x86_64 unknown
The same code executes correctly on the first one, but when I run it on 7.2, first it doesn't recognize sed -i (just the -i argument). After commenting out some lines, I run into another problem:
bash: command substitution: line 1: syntax error near unexpected token `<<<'
The thing is that this script needs to be executed on a remote machine running Redhat 7.2 (that's why I'm testing it in a chroot with the same distro), so installing modules/upgrades to make it runnable is not an option.
A sample code:
#!/bin/bash
...
count=0
while read line; do
    if echo "$line" | grep -q ')'
    then
        ((count++))
        comas=`grep -o "," <<< "$line" | wc -l`
        num=`grep -o "byRef" <<< "$line" | wc -l`
        ...
sed -i 's/shortInteger/int/g' "test.h"
Any ideas?
Thanks.
Edit:
These are the commands that cause me trouble:
comas=`grep -o "," <<< "$line" | wc -l`
sed -i 's/shortInteger/int/g' "test.h"
Edit 2:
GNU bash, version 2.05.8 --> "here string" (<<<) doesn't exist
grep (GNU grep) 2.4.2 --> -o option doesn't exist
GNU sed version 3.02 --> -i option doesn't exist

Finally I got it with the following...
For the versions I have, which I can't update:
sed solution:
sed 's/shortInteger/int/g' "test.h" > test.temp.h;
mv "test.temp.h" "test.h"
This was pointed out by @sorontar. Thanks a lot!
To find special characters or substrings inside a string:
comas=$(echo "$line" | tr " " "\n" | grep -c ",")
According to my source files, there's a pattern: after any comma there is a space. So tr " " "\n" splits the string at the spaces, turning each token into its own line; then I can use grep -c "," to count the lines that contain a comma.
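A quick sanity check of that approach (the sample line is made up, just for illustration; note that grep -c counts matching lines, not occurrences, which is fine here because each space-separated token holds at most one comma):

line="myFunc(byRef a, byRef b, c)"
comas=$(echo "$line" | tr " " "\n" | grep -c ",")     # 2 tokens contain a comma: "a," and "b,"
num=$(echo "$line" | tr " " "\n" | grep -c "byRef")   # 2 tokens contain "byRef"
echo "$comas $num"                                    # prints: 2 2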

Related

sed throws bad flag in substitute command: 'l' in Mac [duplicate]

I've successfully used the following sed command to search/replace text in Linux:
sed -i 's/old_link/new_link/g' *
However, when I try it on my Mac OS X, I get:
"command c expects \ followed by text"
I thought my Mac runs a normal BASH shell. What's up?
EDIT:
According to @High Performance, this is due to Mac sed being of a different (BSD) flavor, so my question would therefore be how do I replicate this command in BSD sed?
EDIT:
Here is an actual example that causes this:
sed -i 's/hello/gbye/g' *
If you use the -i option you need to provide an extension for your backups.
If you have:
File1.txt
File2.cfg
The command (note the lack of space between -i and '' and the -e to make it work on new versions of Mac and on GNU):
sed -i'.original' -e 's/old_link/new_link/g' *
This creates 2 backup files like:
File1.txt.original
File2.cfg.original
There is no portable way to avoid making backup files because it is impossible to find a mix of sed commands that works in all cases:
sed -i -e ... - does not work on OS X as it creates -e backups
sed -i'' -e ... - does not work on OS X 10.6 but works on 10.9+
sed -i '' -e ... - not working on GNU
Note: Given that there isn't a sed command that works on all platforms, you can try another command to achieve the same result.
E.g., perl -i -pe's/old_link/new_link/g' *
I believe on OS X when you use -i an extension for the backup files is required. Try:
sed -i .bak 's/hello/gbye/g' *
Using GNU sed the extension is optional.
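So, side by side (the same edit; the file glob is just an example):

sed -i .bak 's/hello/gbye/g' *    # BSD/macOS sed: backup extension given as a separate word
sed -i 's/hello/gbye/g' *         # GNU sed: in-place edit, no backup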
This works with both GNU and BSD versions of sed:
sed -i'' -e 's/old_link/new_link/g' *
or with backup:
sed -i'.bak' -e 's/old_link/new_link/g' *
Note missing space after -i option! (Necessary for GNU sed)
Had the same problem on Mac and solved it with brew:
brew install gnu-sed
and use it as
gsed SED_COMMAND
You can also set sed as an alias to gsed (if you want):
alias sed=gsed
Or, you can install the GNU version of sed on your Mac, called gsed, and use it with the standard Linux syntax.
For that, install gsed using ports (if you don't have it, get it at http://www.macports.org/) by running sudo port install gsed. Then, you can run sed -i 's/old_link/new_link/g' *
Your Mac does indeed run a BASH shell, but this is more a question of which implementation of sed you are dealing with. On a Mac sed comes from BSD and is subtly different from the sed you might find on a typical Linux box. I suggest you man sed.
Instead of calling sed directly, I do ./bin/sed
And this is the wrapper script in my ~/project/bin/sed
#!/bin/bash
if [[ "$OSTYPE" == "darwin"* ]]; then
exec "gsed" "$#"
else
exec "sed" "$#"
fi
Don't forget to chmod 755 the wrapper script.
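A minimal usage sketch, assuming gsed is already installed on the Mac (e.g. via brew install gnu-sed) and you call the wrapper by its path from ~/project:

./bin/sed -i 's/old_link/new_link/g' *    # dispatches to gsed on macOS, plain sed elsewhere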
Sinetris' answer is right, but I use this with the find command to be more specific about which files I want to change. In general this should work (tested on OS X /bin/bash):
find . -name "*.smth" -exec sed -i '' 's/text1/text2/g' {} \;
In general, using sed without find in complex projects is less efficient.
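If you also want the find-based approach to work unchanged with GNU sed, one sketch (building on the -i'.bak' form shown earlier; the *.smth pattern is just the example from above) is to accept backups and delete them afterwards:

find . -name "*.smth" -exec sed -i'.bak' -e 's/text1/text2/g' {} \;
find . -name "*.smth.bak" -delete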
I've created a function to handle the sed difference between macOS (tested on macOS 10.12) and other OSes:
OS=`uname`
# $(replace_in_file pattern file)
function replace_in_file() {
if [ "$OS" = 'Darwin' ]; then
# for MacOS
sed -i '' -e "$1" "$2"
else
# for Linux and Windows
sed -i'' -e "$1" "$2"
fi
}
Usage:
$(replace_in_file 's,MASTER_HOST.*,MASTER_HOST='"$MASTER_IP"',' "./mysql/.env")
Where:
, is the delimiter
's,MASTER_HOST.*,MASTER_HOST='"$MASTER_IP"',' is the pattern
"./mysql/.env" is the path to the file
As the other answers indicate, there is not a way to use sed portably across OS X and Linux without making backup files. So, I instead used this Ruby one-liner to do so:
ruby -pi -e "sub(/ $/, '')" ./config/locales/*.yml
In my case, I needed to call it from a rake task (i.e., inside a Ruby script), so I used this additional level of quoting:
sh %q{ruby -pi -e "sub(/ $/, '')" ./config/locales/*.yml}
Here's how to apply environment variables to a template file (no backup needed).
1. Create template with {{FOO}} for later replace.
echo "Hello {{FOO}}" > foo.conf.tmpl
2. Replace {{FOO}} with the FOO variable and output to a new foo.conf file
FOO="world" && sed -e "s/{{FOO}}/$FOO/g" foo.conf.tmpl > foo.conf
Works on both macOS 10.12.4 and Ubuntu 14.04.5.
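The same idea extends to several placeholders by chaining -e expressions; the variables and values here are just illustrative:

FOO="world" BAR="42" && sed -e "s/{{FOO}}/$FOO/g" -e "s/{{BAR}}/$BAR/g" foo.conf.tmpl > foo.conf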
Here is an option in bash scripts:
#!/bin/bash
GO_OS=${GO_OS:-"linux"}
function detect_os {
    # Detect the OS name
    case "$(uname -s)" in
        Darwin)
            host_os=darwin
            ;;
        Linux)
            host_os=linux
            ;;
        *)
            echo "Unsupported host OS. Must be Linux or Mac OS X." >&2
            exit 1
            ;;
    esac
    GO_OS="${host_os}"
}
detect_os
if [ "${GO_OS}" == "darwin" ]; then
    sed -i '' -e ...
else
    sed -i -e ...
fi
sed -ie 's/old_link/new_link/g' *
Works on both BSD & Linux with gnu sed

Linux 'cut' command line and replace

I need to create some text using the cut command, replacing the removed characters with spaces, in a Linux terminal.
Examples:
Linux
 inux
  nux
   ux
    x
This is my bash script.
#!/bin/bash
INPUT=$@
SIZE=$(echo $INPUT|wc -c)
let $((SIZE--))
for i in $(seq 1 $SIZE);
do echo $INPUT | cut -c ${i}-${SIZE} ;
done
and I have failed to create text like:
Linux
 inux
  nux
   ux
    x
This should do the trick:
#!/bin/bash
INPUT="$#"
SIZE=${#INPUT}
for ((i=0; i < ${SIZE}; i++)); do
echo "${INPUT}"
INPUT="${INPUT:0:${i}} ${INPUT:$((i+1)):${SIZE}}"
#INPUT="$(echo "$INPUT" | sed "s/^\(.\{${i}\}\)./\1 /")"
done
I added a sed option in the comment, although it creates a sub-process when you don't really need one.
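If you would rather keep cut, as in the question, one possible sketch (only tested mentally on simple input) pads each line with printf and lets cut drop the leading characters:

#!/bin/bash
INPUT="$@"
SIZE=${#INPUT}
for ((i=1; i<=SIZE; i++)); do
    # print i-1 spaces, then the characters from position i to the end
    printf '%*s%s\n' "$((i-1))" '' "$(echo "$INPUT" | cut -c ${i}-${SIZE})"
done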

Bash: how to cleanly log processed lines of ssh/ bash output?

I wrote a Linux bash script with tee and grep to log and timestamp the actions I take in my various ssh sessions. It works, but the logged lines are sometimes mixed together and are full of control characters. How can I properly escape control characters and other characters not visible in the original sessions, and log each line separately?
I am learning bash and the Linux interface, so any other suggestions to improve the script would be extremely welcome!
Here is my script (used as a wrapper for the ssh command):
#! /bin/bash
logfile=~/logs/ssh.log
desc="sshlog ${#}"
tab="\t"
format_line() {
    while IFS= read -r line; do
        echo -e "$(date +"%Y-%m-%d %H:%M:%S %z")${tab}${desc}${tab}${line}"
    done
}
echo "[START]" | format_line >> ${logfile}
# grep is used to filter out command line output while keeping commands
ssh "$#" | tee >(grep -e '\#.*\:.*\$' --color=never --line-buffered | format_line >> ${logfile})
echo "[END]" | format_line >> ${logfile}
And here is a screenshot of the jumbled output in the log file:
A note on the solution: Tiago's answer took care of the nonprinting characters very well. Unfortunately, I just realized that the jumbling is being caused by backspaces and using the up and down keys for command completion. That is, the characters are being piped to grep as soon as they appear, and not line-by-line. I will have to ask about this in another question.
Update: I figured out a way to (almost always) handle up/down completion, backspace completion, and control characters.
You can remove those characters with:
perl -lpe 's/[^[:print:]]//g'
Not filtered:
perl -e 'for($i=0; $i<=255; $i++){print chr($i);}' | cat -A
^@^A^B^C^D^E^F^G^H^I$
^K^L^M^N^O^P^Q^R^S^T^U^V^W^X^Y^Z^[^\^]^^^_ !"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~^?M-^@M-^AM-^BM-^CM-^DM-^EM-^FM-^GM-^HM-^IM-^JM-^KM-^LM-^MM-^NM-^OM-^PM-^QM-^RM-^SM-^TM-^UM-^VM-^WM-^XM-^YM-^ZM-^[M-^\M-^]M-^^M-^_M- M-!M-"M-#M-$M-%M-&M-'M-(M-)M-*M-+M-,M--M-.M-/M-0M-1M-2M-3M-4M-5M-6M-7M-8M-9M-:M-;M-<M-=M->M-?M-@M-AM-BM-CM-DM-EM-FM-GM-HM-IM-JM-KM-LM-MM-NM-OM-PM-QM-RM-SM-TM-UM-VM-WM-XM-YM-ZM-[M-\M-]M-^M-_M-`M-aM-bM-cM-dM-eM-fM-gM-hM-iM-jM-kM-lM-mM-nM-oM-pM-qM-rM-sM-tM-uM-vM-wM-xM-yM-zM-{M-|M-}M-~M-^?
Filtered:
perl -e 'for($i=0; $i<=255; $i++){print chr($i);}' | perl -lpe 's/[^[:print:]]//g' | cat -A
$
!"#$%&'()*+,-./0123456789:;<=>?#ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~$
Explanation:
I am printing the whole ASCII table with:
perl -e 'for($i=0; $i<=255; $i++){print chr($i);}'
I am identifying non printable chars with:
cat -A
I am filtering non printable chars with:
perl -lpe 's/[^[:print:]]//g'
Edit: It seems to me that you need to remove ANSI color chars:
Example:
perl -MTerm::ANSIColor -e 'print colored("yellow on_magenta","yellow on_magenta"),"\n"'| sed -r "s/\x1B\[([0-9]{1,2}(;[0-9]{1,2})?)?[m|K]//g" | perl -lpe 's/[^[:print:]]//g'
Adapting to your code:
format_line() {
    while IFS= read -r line; do
        line=$(sed -r "s/\x1B\[([0-9]{1,2}(;[0-9]{1,2})?)?[m|K]//g" <<< "$line")
        line=$(perl -lpe 's/[^[:print:]]//g' <<< "$line")
        echo -e "$(date +"%Y-%m-%d %H:%M:%S %z")${tab}${desc}${tab}${line}"
    done
}
I also edited your grep command:
ssh "$#" | tee >(grep -Po '(?<=\$).*' --color=never --line-buffered | format_line >> ${logfile})
Below the output of my test:
2014-06-26 10:11:10 +0100 sshlog tiago@localhost [START]
2014-06-26 10:11:15 +0100 sshlog tiago@localhost whoami
2014-06-26 10:11:16 +0100 sshlog tiago@localhost exit
2014-06-26 10:11:16 +0100 sshlog tiago@localhost [END]
While writing your own script is a great learning experience, you can also use script to record everything printed on your terminal to a file.
The resulting file will still contain the control characters, but there are multiple ways to get rid of them, as described in How to clean up output of linux 'script' command.
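For example, with util-linux script on Linux, a wrapper like the one above could shrink to something like this (a sketch; the flag syntax differs on BSD/macOS, where the command is given after the file name rather than via -c):

#! /bin/bash
# sketch: record the whole ssh session with script(1) instead of tee/grep
logfile=~/logs/ssh.typescript
script -a -c "ssh $*" "$logfile"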

command not working as expected if run via /bin/sh -c

I have to concatenate a set of files. Directory structure is like this:
root/features/xxx/multiple_files... -> root/xxx/single_file
What I have written (and it works fine):
for dirname in $(ls -d root/features/*|awk -F/ '{print $NF}');do;mkdir root/${dirname};cat root/features/${dirname}/* > root/${dirname}/final.txt;done
But when I run the same thing via the sh shell:
/bin/sh -c "for dirname in $(ls -d root/features/*|awk -F/ '{print $NF}');do;mkdir root/${dirname};cat root/features/${dirname}/* > root/${dirname}/final.txt;done"
it gives me errors:
/bin/sh: -c: line 1: syntax error near unexpected token `201201000'
/bin/sh: -c: line 1: `201201000'
My process always prepends /bin/sh -c before running any commands. Any suggestions on what might be going wrong here? Any alternate ways? I have spent a really long time on this, without making much headway!
EDIT:
ls -d root/features/*|awk -F/ '{print $NF}' returns:
201201
201201000
201201001
201201002
201201003
201201004
201201005
201201006
201201007
201202000
201205000
201206000
201207000
201207001
201207002
Always use sh -c 'cmd1 | cmd2' with single quotes.
Always use sh -eu -xv -c 'cmd1 | cmd2' to debug.
Always use bash -c 'cmd1 | cmd2' if your code is Bash-specific (cf. process substitution, ...).
Remove ; after do in for ... ; do; mkdir ....
Escape possible single quotes within single quotes like so: ' --> '\''.
(And sometimes just formatting your code clarifies a lot.)
Applied to your command, this should look somewhat like this:
# test version
/bin/sh -c '
for dirname in $(ls -d /* | awk -F/ '\''{print $NF}'\''); do
printf "%s\n" "mkdir root/${dirname}";
printf "%s\n" "cat root/features/${dirname}/* > root/${dirname}/final.txt";
echo
done
' | nl
# test version using 'printf' instead of 'ls'
sh -c '
printf "%s\000" /*/ | while IFS="" read -r -d "" file; do
dirname="$(basename "$file")"
printf "%s\n" "mkdir root/${dirname}";
printf "%s\n" "cat root/features/${dirname}/* > root/${dirname}/final.txt";
echo
done
' | nl
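Once the test output looks right, the non-test variant would presumably drop the printf wrappers and run the commands directly, e.g. (a sketch, still following the single-quote convention above):

/bin/sh -c '
for dirname in $(ls -d root/features/* | awk -F/ '\''{print $NF}'\''); do
    mkdir "root/${dirname}"
    cat root/features/"${dirname}"/* > "root/${dirname}/final.txt"
done
'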
I got this to run in the little test environment I set up on my box. Turns out it didn't like the double quotes. The issue I ran into was the quotes around the awk statement: if you wrap it in double quotes it prints the whole thing. I used cut to get the desired result, but my guess is you'll have to change the -f arg to 3 instead of 2, I think.
/bin/sh -c 'for dirname in $(ls -d sh_test/* | awk -F/ '\''{print $NF}'\''); do mkdir sh_test_root/${dirname}; cat sh_test/${dirname}/* > sh_test_root/${dirname}/final.txt;done'
Edit: Tested the edit proposed by nadu and it works fine. The above reflects that change.

How to get the command line args passed to a running process on unix/linux systems?

On SunOS there is a pargs command that prints the command-line arguments passed to a running process.
Is there any similar command in other Unix environments?
There are several options:
ps -fp <pid>
cat /proc/<pid>/cmdline | sed -e "s/\x00/ /g"; echo
There is more info in /proc/<pid> on Linux, just have a look.
On other Unixes things might be different. The ps command will work everywhere, the /proc stuff is OS specific. For example on AIX there is no cmdline in /proc.
This will do the trick:
xargs -0 < /proc/<pid>/cmdline
Without the xargs, there will be no spaces between the arguments, because they have been converted to NULs.
Full commandline
For Linux & Unix systems you can use ps -ef | grep process_name to get the full command line.
On SunOS systems, if you want to get full command line, you can use
/usr/ucb/ps -auxww | grep -i process_name
To get the full command line you need to become superuser.
List of arguments
pargs -a PROCESS_ID
will give a detailed list of the arguments passed to a process. It will output the array of arguments like this:
argv[0]: first argument
argv[1]: second..
argv[*]: and so on..
I didn't find any similar command for Linux, but I would use the following command to get similar output:
tr '\0' '\n' < /proc/<pid>/environ
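The same tr trick works on the argument list itself (rather than the environment), which is closer to what pargs shows:

tr '\0' '\n' < /proc/<pid>/cmdline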
You can use pgrep with -f (full command line) and -l (long description):
pgrep -l -f PatternOfProcess
This method has a crucial difference from any of the other responses: it works on CygWin, so you can use it to obtain the full command line of any process running under Windows (execute as elevated if you want data about any elevated/admin process). Any other method for doing this on Windows is more awkward (for example).
Furthermore: in my tests, the pgrep way has been the only method that worked to obtain the full path for scripts running inside CygWin's python.
On Linux
cat /proc/<pid>/cmdline
outputs the command line of the process <pid> (the command including its args), with each record terminated by a NUL character.
A Bash Shell Example:
$ mapfile -d '' args < /proc/$$/cmdline
$ echo "#${#args[#]}:" "${args[#]}"
#1: /bin/bash
$ echo $BASH_VERSION
5.0.17(1)-release
Another variant of printing /proc/PID/cmdline with spaces in Linux is:
cat -v /proc/PID/cmdline | sed 's/\^@/\ /g' && echo
In this way cat prints NULL characters as ^@ and then you replace them with a space using sed; echo prints a newline.
Rather than using multiple commands to edit the stream, just use one - tr translates one character to another:
tr '\0' ' ' </proc/<pid>/cmdline
ps -eo pid,args prints the PID and the full command line.
You can simply use:
ps -o args= -f -p ProcessPid
In addition to all the above ways to convert the text, if you simply use 'strings', it will put the output on separate lines by default, with the added benefit that it may also prevent any chars that could scramble your terminal from appearing.
Both outputs in one command:
strings /proc/<pid>/cmdline /proc/<pid>/environ
The real question is... is there a way to see the real command line of a process in Linux that has been altered so that the cmdline contains the altered text instead of the actual command that was run.
On Solaris
ps -eo pid,comm
Something similar can be used on other Unix-like systems.
On Linux, with bash, to output the args quoted so you can edit the command and rerun it:
</proc/"${pid}"/cmdline xargs --no-run-if-empty -0 -n1 \
bash -c 'printf "%q " "${1}"' /dev/null; echo
On Solaris, with bash (tested with 3.2.51(1)-release) and without gnu userland:
IFS=$'\002' tmpargs=( $( pargs "${pid}" \
| /usr/bin/sed -n 's/^argv\[[0-9]\{1,\}\]: //gp' \
| tr '\n' '\002' ) )
for tmparg in "${tmpargs[@]}"; do
printf "%q " "$( echo -e "${tmparg}" )"
done; echo
Linux bash Example (paste in terminal):
{
## set up initial args
argv=( /bin/bash -c '{ /usr/bin/sleep 10; echo; }' /dev/null 'BEGIN {system("sleep 2")}' "this is" \
"some" "args "$'\n'" that" $'\000' $'\002' "need" "quot"$'\t'"ing" )
## run in background
"${argv[#]}" &
## recover into eval string that assigns it to argv_recovered
eval_me=$(
printf "argv_recovered=( "
</proc/"${!}"/cmdline xargs --no-run-if-empty -0 -n1 \
bash -c 'printf "%q " "${1}"' /dev/null
printf " )\n"
)
## do eval
eval "${eval_me}"
## verify match
if [ "$( declare -p argv )" == "$( declare -p argv_recovered | sed 's/argv_recovered/argv/' )" ];
then
echo MATCH
else
echo NO MATCH
fi
}
Output:
MATCH
Solaris Bash Example:
{
## set up initial args
argv=( /bin/bash -c '{ /usr/bin/sleep 10; echo; }' /dev/null 'BEGIN {system("sleep 2")}' "this is" \
"some" "args "$'\n'" that" $'\000' $'\002' "need" "quot"$'\t'"ing" )
## run in background
"${argv[#]}" &
pargs "${!}"
ps -fp "${!}"
declare -p tmpargs
eval_me=$(
printf "argv_recovered=( "
IFS=$'\002' tmpargs=( $( pargs "${!}" \
| /usr/bin/sed -n 's/^argv\[[0-9]\{1,\}\]: //gp' \
| tr '\n' '\002' ) )
for tmparg in "${tmpargs[@]}"; do
printf "%q " "$( echo -e "${tmparg}" )"
done; echo
printf " )\n"
)
## do eval
eval "${eval_me}"
## verify match
if [ "$( declare -p argv )" == "$( declare -p argv_recovered | sed 's/argv_recovered/argv/' )" ];
then
echo MATCH
else
echo NO MATCH
fi
}
Output:
MATCH
If you want output that is as long as possible (not sure what limits there are), similar to Solaris' pargs, you can use this on Linux & OS X:
ps -ww -o pid,command [-p <pid> ... ]
Try ps -n in a Linux terminal. This will show:
1. All RUNNING processes, their command lines, and their PIDs
2. The program that initiated each process
Afterwards you will know which process to kill.
