I will post my script here
#!/bin/tcsh
echo 'Running'
set fileN = '2021-02-07-0448-04S.JKH_RR_BHN.SAC'
set fileE = '2021-02-07-0448-04S.JKH_RR_BHE.SAC'
set compR=BHR
set compT=BHT
set compR_name=BHR.SAC
set compT_name=BHT.SAC
set fileN_rot = `echo $fileN | awk '{split($0,a,".SAC"); print a[1]}'`
set fileE_rot = `echo $fileE | awk '{split($0,a,".SAC"); print a[1]}'`
echo 'output1'
echo $fileN
echo $fileE
echo 'output2'
echo $fileN_rot
echo $fileE_rot
echo 'output3'
echo $fileE_rot-$compR_name
echo $fileN_rot-$compT_name
The output is:
Running
output1
2021-02-07-0448-04S.JKH_RR_BHN.SAC
2021-02-07-0448-04S.JKH_RR_BHE.SAC
output2
2021-02-07-0448-04S.JKH_RR_BHN
2021-02-07-0448-04S.JKH_RR_BHE
output3
2021-02-07-0448-04S.JKH_RR_BHN
-BHR.SAC
2021-02-07-0448-04S.JKH_RR_BHE-BHT.SAC
echo $fileE_rot-$compR_name is giving the wrong output.
The output above is copy-pasted from the output file, which is why -BHR.SAC appears on a new line.
But in the shell terminal it shows as -BHR.SAC07-0448-04S.JKH_RR_BHN.
I find it strange.
Looks like you have some control chars in your strings. Run cat -Ev script to see them and if you see ^Ms in the output then read Why does my tool output overwrite itself and how do I fix it? for how to deal with them.
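For example, a minimal sketch of spotting and stripping the carriage returns (script here is just a placeholder for your script's file name):
$ cat -Ev script                                                  # carriage returns show up as ^M at the ends of lines
$ tr -d '\r' < script > script.fixed && mv script.fixed script    # strip them; dos2unix script does the same job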
Don't write scripts in [t]csh, though, as it wasn't designed for that. Writing a script in csh is like digging a hole with a toothbrush - sure you CAN kinda get there in the end but there are better alternatives. See https://www.google.com/search?q=google+csh+why+not.
Having said that, it's not obvious why you're trying to manipulate text in any shell. Shells exist to manipulate (create/destroy) files and processes and sequence calls to tools. The people who invented shell also invented tools such as awk for shell to call when appropriate to manipulate text. So, here is how to really write a shell script to do what you want (the shell part is to call awk to manipulate the text):
$ cat tst.sh
#!/usr/bin/env bash
awk '
BEGIN {
print "Running"
fileN = "2021-02-07-0448-04S.JKH_RR.SAC"
fileE = "2021-02-07-0448-04S.JKH_RR_BHE.SAC"
compR = "BHR"
compT = "BHT"
compR_name = "BHR.SAC"
compT_name = "BHT.SAC"
fileN_rot = fileN
sub(/\.SAC$/,"",fileN_rot)
fileE_rot = fileE
sub(/\.SAC$/,"",fileE_rot)
print "output1"
print fileN
print fileE
print "output2"
print fileN_rot
print fileE_rot
print "output3"
print fileE_rot "-" compR_name
print fileN_rot "-" compT_name
}
'
$ ./tst.sh
Running
output1
2021-02-07-0448-04S.JKH_RR.SAC
2021-02-07-0448-04S.JKH_RR_BHE.SAC
output2
2021-02-07-0448-04S.JKH_RR
2021-02-07-0448-04S.JKH_RR_BHE
output3
2021-02-07-0448-04S.JKH_RR_BHE-BHR.SAC
2021-02-07-0448-04S.JKH_RR-BHT.SAC
or if there really was some reason to want to do it directly in a shell (e.g. this code is in some loop manipulating files named based on these variables) then:
$ cat tst.sh
#!/usr/bin/env bash
fileN='2021-02-07-0448-04S.JKH_RR.SAC'
fileE='2021-02-07-0448-04S.JKH_RR_BHE.SAC'
compR='BHR'
compT='BHT'
compR_name='BHR.SAC'
compT_name='BHT.SAC'
fileN_rot="${fileN%*.SAC}"
fileE_rot="${fileE%*.SAC}"
echo 'output1'
echo "$fileN"
echo "$fileE"
echo 'output2'
echo "$fileN_rot"
echo "$fileE_rot"
echo 'output3'
echo "${fileE_rot}-${compR_name}"
echo "${fileN_rot}-${compT_name}"
$ ./tst.sh
output1
2021-02-07-0448-04S.JKH_RR.SAC
2021-02-07-0448-04S.JKH_RR_BHE.SAC
output2
2021-02-07-0448-04S.JKH_RR
2021-02-07-0448-04S.JKH_RR_BHE
output3
2021-02-07-0448-04S.JKH_RR_BHE-BHR.SAC
2021-02-07-0448-04S.JKH_RR-BHT.SAC
Related
Trying to execute remotely a bunch of commands in a perl script
It looks like this:
$CMD1 = "/usr/sbin/mminfo -av -q \"savetime>'-1 day 18:00:00',savetime<'17:59:59'\" -r \"ssid,totalsize,nfiles,pool\"|grep \"xxxxx\"|/usr/bin/awk '!seen[\$1]++'";
print Dumper $CMD1;
$CMD = "/usr/bin/ssh xxxx\#$SRV \'$CMD1\' 2>&1";
print Dumper $CMD;
But I still have a problem with the $1 in the awk command; it seems to be dropped when the command runs.
What I can see:
$VAR1 = '/usr/sbin/mminfo -av -q "savetime>\'-1 day 18:00:00\',savetime<\'17:59:59\'" -r "ssid,totalsize,nfiles,pool"|grep "xxxxxx"|/usr/bin/awk \'!seen[$1]++\'';
$VAR1 = '/usr/bin/ssh xxxxx@\'xxxxxx\' \'/usr/sbin/mminfo -av -q "savetime>\'-1 day 18:00:00\',savetime<\'17:59:59\'" -r "ssid,totalsize,nfiles,pool"|grep "xxxxx"|/usr/bin/awk \'!seen[$1]++\'\' 2>&1';
So the '$1' of the awk command is passed correctly to the remote host, but when running:
@RESU = `$CMD`;
print Dumper @RESU;
I can see that my $1 is missing (or interpreted by the remote shell as a null value):
$VAR1 = 'awk: command line:1: !seen[]++
';
$VAR2 = 'awk: command line:1: ^ syntax error
';
$VAR3 = 'awk: command line:1: error: invalid subscript expression
';
I've tried many things like quoting or double-quoting the string, creating the string with Perl's qq function, putting the value of $CMD1 directly into $CMD, and escaping quotes, but with no luck.
And of course, my awk is piped to another awk (not provided here).
I don't want a solution which runs awk locally since I have millions of lines returned from the 'mminfo' command.
Any clue (or a better way to do that)?
You might want to break it into smaller pieces for readability, and use the multi-arg invocation of system to avoid perl having to spawn a shell. The q() function goes a long way toward avoiding quoting hell.
$mminfo = q{/usr/sbin/mminfo -av -q "savetime>'-1 day 18:00:00',savetime<'17:59:59'" -r "ssid,totalsize,nfiles,pool"};
$awk = q{/usr/bin/awk '/xxxxx/ && !seen[$1]++'};
print Dumper [$mminfo, $awk];
@cmd = ( "/usr/bin/ssh", "xxxx\@$SRV", "$mminfo | $awk" );
print Dumper \@cmd;
system @cmd;
Even if you cannot use modules in your final environment, you may be able to use them on your local machine. In that case you can use them to quote the command programmatically and then just copy and paste the quoted string into the script you are developing. For instance:
use strict;
use warnings;
use Net::OpenSSH;
my $quoted_cmd1 = Net::OpenSSH->shell_quote('/usr/sbin/mminfo', '-av',
-q => q(savetime>'-1 day 18:00:00',savetime<'17:59:59'),
-r => 'ssid,totalsize,nfiles,pool',
\\'|',
'grep', 'xxxxx',
\\'|',
'/usr/bin/awk', '!seen[$1]++');
my $SRV = "foo";
my $quoted_cmd = Net::OpenSSH->shell_quote('/usr/bin/ssh', "xxxx\@$SRV",
$quoted_cmd1,
\\'2>&1');
print "$quoted_cmd\n";
Which outputs...
/usr/bin/ssh xxxx@foo '/usr/sbin/mminfo -av -q '\''savetime>'\''\'"''"'-1 day 18:00:00'\''\'"''"',savetime<'\''\'\''17:59:59\'\'' -r ssid,totalsize,nfiles,pool | grep xxxxx | /usr/bin/awk '\''!seen[$1]++'\' 2>&1
I have the while loop below that is using the variable pov. I need each line set to a variable that can be called in a connection string, but can't figure out how to create a loop to feed in each line separately.
I want something like if [ ! -z $pov ]; then ...shell to execute using $pov... fi for each line in seq_fy.txt.
What I am working with:
cat seq_fy.txt | while read pov; do
echo "pov$((n++))=$pov"
###wanting "if [ ! -z $pov] then <execute> fi" for each line in seq_fy.txt
done
$ ./while_loop_only
pov0=
pov1=SPT_SEQ_010,FY15
pov2=SPT_SEQ_010,FY16
pov3=SPT_SEQ_020,FY15
pov4=SPT_SEQ_020,FY16
pov5=SPT_SEQ_030,FY15
pov6=SPT_SEQ_030,FY16
pov7=SPT_SEQ_040,FY15
pov8=SPT_SEQ_040,FY16
pov9=SPT_SEQ_050,FY15
pov10=SPT_SEQ_050,FY16
Looks like I have been way over-analyzing this...
Basically I don't need anything with POV or exporting variables at all; just put the command inside the while loop, fed with $line, and it seems to work as expected:
cat filename.txt | while read line; do
$owsdirectory"/hpm_ws_client.sh" processCalcScriptOptions "$appName" "$line" "$layers" "$stages" "" "$stages" "$stages" FALSE > "$appLogFolder""/""$line""_ProcessID.log"
done
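For reference, a slightly tightened version of the same idea, with the non-empty check from the original question and without the extra cat (the file and variable names are the placeholders used above), might look like:
while read -r pov; do
    if [ -n "$pov" ]; then    # skip blank lines like the one that produced pov0= above
        "$owsdirectory/hpm_ws_client.sh" processCalcScriptOptions "$appName" "$pov" "$layers" "$stages" "" "$stages" "$stages" FALSE > "$appLogFolder/${pov}_ProcessID.log"
    fi
done < seq_fy.txt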
I have a .txt input file that is the product of a printf defining each line as POV(n)="sequenceX,yearY"
cat output.PA
POV01="SEQ010,FY15"
POV02="SEQ010,FY16"
POV03="SEQ020,FY15"
POV04="SEQ020,FY16"
How can I source this file so that I can export each POV as the variable value of sequence and fy, respectively for the given line?
export POV(n)="$seq,$fy"
The printf I have used to get to this point is as follows:
cat step1
while read -r seq fy; do
printf 'POV%02d="%s,%s"\n' ${counter} ${seq} ${fy}
(( counter = counter + 1 ))
done <test_scenario_02.txt > output.PA
If I source output.PA I get the following:
./step2
POV00=YEAR,
POV01=SEQ010,FY15
POV02=SEQ010,FY16
POV03=SEQ020,FY15
POV04=SEQ020,FY16
POV05=SEQ030,FY15
POV06=SEQ030,FY16
POV07=SEQ030,FY15
POV08=SEQ030,FY16
POV09=SEQ040,FY15
POV10=SEQ040,FY16
POV11=SEQ050,FY15
POV12=SEQ050,FY16
$ cat step2
. ./output.PA
set | grep "^POV"
It is not at all clear what you want, but it seems like you are trying to create an array variable that holds all the values in output.PA. You probably don't need to do that, but this should work:
$ pov=($(sed -e 's/[^"]*"//' -e 's/"$//' output.PA))
$ echo ${pov[0]}
SEQ010,FY15
$ echo ${pov[1]}
SEQ010,FY16
$ echo ${pov[2]}
SEQ020,FY15
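If each value then needs to be fed to a command (the connection-string call mentioned in the earlier question), a loop over that array is one way to do it; a minimal sketch, assuming the same pov array:
for p in "${pov[@]}"; do
    seq=${p%,*}    # e.g. SEQ010
    fy=${p#*,}     # e.g. FY15
    echo "would run the connection string with seq=$seq fy=$fy"    # replace echo with the real command
done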
Hello: I have a lot of files called test-MR3000-1.nt to test-MR4000-1.nt, where the number in the name changes by 100 (i.e. I have 11 files):
$ ls test-MR*
test-MR3000-1.nt test-MR3300-1.nt test-MR3600-1.nt test-MR3900-1.nt
test-MR3100-1.nt test-MR3400-1.nt test-MR3700-1.nt test-MR4000-1.nt
test-MR3200-1.nt test-MR3500-1.nt test-MR3800-1.nt
and also a file called resonancia.kumac which in a couple of lines contains the string XXXX.
$ head resonancia.kumac
close 0
hist/delete 0
vect/delete *
h/file 1 test-MRXXXX-1.nt
sigma MR=XXXX
I want to execute a bash file which substitutes the string XXXX in that file with the set of numbers obtained from the command ls *MR* | cut -b 8-11.
I found a post with some suggestions and tried my own code:
for i in `ls *MR* | cut -b 8-11`; do
sed -e "s/XXXX/$i/" resonancia.kumac >> proof.kumac
done
however, in the substitution the numbers are surrounded by single quotes (e.g. '3000').
Q: What should I do to avoid the single quotes around the numbers? Thank you.
This is a reproducer for the environment described:
for ((i=3000; i<=4000; i+=100)); do
touch test-MR${i}-1.nt
done
cat >resonancia.kumac <<'EOF'
close 0
hist/delete 0
vect/delete *
h/file 1 test-MRXXXX-1.nt
sigma MR=XXXX
EOF
This is a script which will run inside that environment:
content="$(<resonancia.kumac)"
for f in *MR*; do
substring=${f:7:4}
echo "${content//XXXX/$substring}"
done >proof.kumac
...and the output looks like so:
close 0
hist/delete 0
vect/delete *
h/file 1 test-MR3000-1.nt
sigma MR=3000
There are no quotes anywhere in this output; the problem described is not reproduced.
or if it could be perl:
#!/usr/bin/perl
@ls = glob('*MR*');
open (FILE, 'resonancia.kumac') || die("not good\n");
@cont = <FILE>;
$f = shift(@ls);
$f =~ /test-MR([0-9]*)-1\.nt/;
$nr = $1;
@out = ();
foreach $l (@cont){
    if($l =~ s/XXXX/$nr/){
        $f = shift(@ls);
        $f =~ /test-MR([0-9]*)-1\.nt/;
        $nr = $1;
    }
    push @out, $l;
}
close FILE;
open(FILE, '>resonancia.kumac') or die("not good\n");
print FILE @out;
That would replace each XXXX with the number taken from the next filename in turn, which seemed to be what the question asked before it was changed.
I am having problems concatenating two strings in Bash (I am using Cygwin).
When I do it step by step in the Cygwin window, it works,
i.e. by defining dt=2012-12-31 and c=.txt explicitly and then concatenating with filename=${dt}${c}.
It doesn't seem to work when I run it through my script, where these variables are defined by cutting and assigning values from the content of a file.
Though the variables are assigned the same values as above, the concatenation in this case doesn't work:
instead of 2012-12-31.txt I am getting .txt-12-31 as the result.
The code is:
for x in {0..11}
do
IFS=$'\n'
filename=date_list.txt
file=($(<"$filename"))
IFS=$'\t\n'
dt=${file[$x]}
echo $dt
for y in {0..85}
do
IFS=$'\n'
filename=SQL_Mnemonics.txt
file=($(<"$filename"))
IFS=$'\t\n'
Mn=${file[$y]}
for k in {3..502}
do
IFS=$'\n'
c=.txt
filename=${dt}${c}
file=($(<"$filename"))
IFS=$'\t\n'
echo ${file[$k]} > temp_file.txt
cusip=`cut -c11-19 temp_file.txt`
result=$(sh ctest.sh $Mn, $dt, $cusip)
echo "$result" > tmp1.txt
t1=`cut -c18-40 tmp1.txt`
echo $t1 | sed 's/[[:space:]]//g' > temp_file.txt
cat temp_file.txt | sed 's/-----//g' >> ForFame/${Mn}.${dt}.txt
done
done
done
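The .txt-12-31 result is the classic symptom of a trailing carriage return in $dt, the same ^M issue discussed at the top of this page: the \r sends the cursor back to column 1, so .txt overprints the start of the date. A minimal sketch of a fix, assuming date_list.txt (and the other input files) have Windows line endings, is to strip the \r as each value is read:
dt=${file[$x]}
dt=${dt%$'\r'}    # drop a trailing carriage return, if any
# or clean the files once, before the loops, e.g.:
# tr -d '\r' < date_list.txt > date_list.clean && mv date_list.clean date_list.txt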