I'm trying to wrap my head around nested command substitution. I tried nesting backticks but obviously that doesn't work. How would you nest the following without declaring the ${host} variable first?
host=$(hostname|cut -c1-14);for id in `aladmin list|grep ${host}|awk '{print $2}'`;do aladmin delete ${id};done
The command lists all alarms on a server, greps for the first 14 characters of the hostname and then deletes the alarm with the alarm ID found in field 2 by awk.
My question in no way duplicates this previous post:
How to properly nest Bash backticks
Thanks in advance,
Bjoern
Do everything in awk. There's no need to use the for loop and the grep, etc. There are better ways than this, but as a first approximation, try something like:
aladmin list | awk "/$(hostname | cut -c1-14)/"'{ print "aladmin delete " $2 | "sh"}'
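To answer the nesting question directly: $(...) substitutions nest without the escaping gymnastics that backticks need, so the original one-liner can drop the ${host} variable entirely. A minimal sketch, assuming the same aladmin commands as above:
for id in $(aladmin list | grep "$(hostname | cut -c1-14)" | awk '{print $2}'); do
    aladmin delete "$id"
done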
This is my txt file
type=0
vcpu_count=10
maste=0
h=0
p=0
memory=23.59
num=2
I want to get the vcpu_count and memory values and store them in an array through Perl (an automation script).
awk -F'=' '/vcpu_count/{printf "\n",$1}' .vmConfig.txt
I am using this command just to test on the terminal, but I am getting a blank line. How do I do it? I need to get these two values and check a condition.
If you are using Perl anyway, just use Perl for this too.
my %array;
open(my $config, "<", ".vmConfig.txt") or die "$0: Could not open .vmConfig.txt: $!\n";
while (<$config>) {
    next unless /^\s*(vcpu_count|memory)\s*=\s*(.*?)\s*\n/;
    $array{$1} = $2;
}
close($config);
If you don't want the result to be an associative array (aka hash), refactoring should be relatively easy.
The following awk solutions may help you here.
Solution 1:
awk '/vcpu_count/{print;next} /memory/{print}' Input_file
Output will be as follows:
vcpu_count=10
memory=23.59
Solution 2:
If you want to print the values on a single line using printf, the following may help:
awk '/vcpu_count/{val=$0;next} /memory/{printf("%s AND %s\n",val,$0)}' Input_file
Output will be as follows:
vcpu_count=10 AND memory=23.59
When you use awk -F'=' '/vcpu_count/{printf "\n",$1}' .vmConfig.txt there are a couple of mistakes. Firstly, printf "\n" will only ever print a newline, as you have found. You need to add a format specifier: something like printf "%s\n", $2 will treat field 2 as a string and substitute it into the printed string. Checking out man printf at the command line will explain a bit more.
Secondly, as changed there, when you used $1 you were using the first field, which is the key in this case ($2 is the value, while $0 is the whole line).
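Putting those two fixes together, the corrected one-liner would be:
awk -F'=' '/vcpu_count/{printf "%s\n", $2}' .vmConfig.txt
For the sample file this prints 10; swap in /memory/ to get 23.59.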
triplee's solution is probably the most appropriate, but if there is a particular reason to run awk before Perl, the following may help.
As you have done, it splits on =, but then outputs as CSV, which you can change as appropriate. Even if the input lines are not always in the same order, it will output them in a predictable order on a single line:
awk 'BEGIN {
    FS = "="
    OFS = ","    # tabs, etc. if wanted; delete for spaces
}
/vcpu_count/ { cpu = $2 }
/memory/     { mem = $2 }
END { print cpu, mem }' .vmConfig.txt
This gives
10,23.59
I have data output something like this captured in a file:
List item1
attrib1: someval11
attrib2: someval12
attrib3: someval13
attrib4: someval14
List item2
attrib1: someval21
attrib2: someval12
attrib4: someval24
attrib3: someval23
List item3
attrib1: someval31
attrib2: someval32
attrib3: someval33
attrib4: someval34
I want to extract attrib1, attrib3, attrib4 from the list of data only if "attrib2 is someval12".
Note that attrib3 and attrib4 could be in any order after attrib2.
So far I have tried to use grep with the -A and -B options, but I need to specify line counts, and that is a sort of hardcoding which I want to avoid:
grep -B 1 -A 1 -A 2 "attrib2: someval12" | egrep -w "attrib1|attrib3|attrib4"
Can I use any other option of grep that doesn't involve specifying the number of lines before and after the match for this example?
Grep and other tools (like join, sort, uniq) work on the principle "one record per line". It is therefore possible to use a 3-step pipe:
Convert each list item to a single line, using sed.
Do the filtering, using grep.
Convert back to the original format, using sed.
First you need to pick a character that is known not to occur in the input, and use it as separator character. For example, '|'.
Then, find the sed command for step 1, which transforms the input to the format
List item1|attrib1: someval11|attrib2: someval12|attrib3: someval13|attrib4: someval14|
List item2|attrib1: someval21|attrib2: someval12|attrib4: someval24|attrib3: someval23|
List item3|attrib1: someval31|attrib2: someval32|attrib3: someval33|attrib4: someval34|
Now step 2 is easy.
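Putting the three steps together with GNU sed (a sketch: it assumes every record starts with a 'List' line and that '|' never occurs in the data):
# 1) join each record onto one line, 2) filter, 3) split back and keep the wanted attributes
sed -n '/^List/{x;s/\n/|/g;/./{s/$/|/;p};x;h;d};H;${x;s/\n/|/g;s/$/|/;p}' file \
  | grep 'attrib2: someval12' \
  | sed 's/|$//;s/|/\n/g' \
  | grep -E 'attrib1|attrib3|attrib4'
The first sed collects each record in the hold space and prints it joined with '|' whenever the next 'List' line (or the end of input) is reached; the last two commands undo the joining and keep only the attributes you asked for.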
I am writing a curl bash script to test web services. I will have file_1, which will contain the URL paths:
/path/to/url/1/{dynamic_path}.xml
/path/to/url/2/list.xml?{query_param}
Since the values in between {} are dynamic, I am creating a separate file which will have the values for these params. The input would be key-value pairs, i.e.,
dynamic_path=123
query_param=shipment
By combining the two files, the result should become:
/path/to/url/1/123.xml
/path/to/url/2/list.xml?shipment
This is the background of my problem. Now my questions:
I am doing it in a bash script, and the approach I am using is to first read the file with the parameters and parse it on '=' into key/value pairs, so that replacing is easy: for each URL I find the substring between {} and use that text as the key to fetch the value from the array.
My approach sounds okay (at least to me) BUT I just realized that declare -A input_map is only supported in Bash 4.0 and higher. Now, I am not 100% sure what the target environment for my script will be, since it could run in multiple departments.
Is there anything better you could suggest ? Any other approach ? Any other design ?
P.S:
This is the first time I am working on a bash script.
Here's a risky way to do it: Assuming the values are in a file named "values"
. values
eval "$( sed 's/^/echo "/; s/{/${/; s/$/"/' file_1 )"
Basically, stick a dollar sign in front of the braces and transform each line into an echo statement.
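For the sample file_1 above, the sed produces this little script, which eval then runs with the sourced values in scope:
echo "/path/to/url/1/${dynamic_path}.xml"
echo "/path/to/url/2/list.xml?${query_param}"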
More effort, with awk:
awk '
    NR==FNR { split($0, a, /=/); v[a[1]] = a[2]; next }
    (i = index($0, "{")) && (j = index($0, "}")) {
        key = substr($0, i+1, j-i-1)
        print substr($0, 1, i-1) v[key] substr($0, j+1)
    }
' values file_1
There are many ways to do this. You seem to be thinking of putting all the inputs in a hashmap and then iterating over that hashmap. In shell scripting it's more common and practical to process things as a stream using pipelines.
For example, your inputs could be in a csv file:
123,shipment
345,order
Then you could process this file like this:
while IFS=, read path param; do
    sed -e "s/{dynamic_path}/$path/" -e "s/{query_param}/$param/" file_1
done < input.csv
The output will be:
/path/to/url/1/123.xml
/path/to/url/2/list.xml?shipment
/path/to/url/1/345.xml
/path/to/url/2/list.xml?order
But this is just an example; there are many other ways.
You should definitely start by writing a proof of concept and test it on your deployment server. This example should work in old versions of bash too.
I have a simple function in a bash script and I would like to pipe stdout to it as input.
jc_hms(){
    printf "$1"
}
I'd like to use it in this manner.
var=`echo "teststring" | jc_hms`
Of course I used redundant functions echo and printf to simplify the question, but you get the idea. Right now I get a "not found" error, which I assume means my parameter delimiting is wrong (the "$1" part). Any suggestions?
Originally the jc_hms function was used like this:
echo `jc_hms "teststring"` > //dev/tts/0
but I'd like to store the results in a variable for further processing first, before sending it to the serial port.
EDIT:
So to clarify, I am NOT trying to print stuff to the serial port; I'd like to interface to my bash functions using the "|" pipe character, and I am wondering if this is possible.
EDIT: Alright, here's the full function.
jc_hms(){
    hr=$(($1 / 3600))
    min=$((($1 % 3600) / 60))
    sec=$(($1 % 60))
    printf "%d:%02d:%02d" $hr $min $sec
}
I'm using the function to form a string which comes from this line of code:
songplaytime=`echo $songtime | awk '{print $1}'`
printstring="`jc_hms $songplaytime`" #store resulting string in printstring
Where $songtime is a string expressed as "playtime totaltime" delimited by a space.
I wish I could just do this in one line and pipe it after the awk:
printstring=`echo $songtime | awk '{print $1}' | jc_hms`
like so.
To answer your actual question, when a shell function is on the receiving end of a pipe, standard input is inherited by all commands in the function, but only commands that actually read from their standard input consume any data. For commands that run one after the other, later commands can only see what isn't consumed by previous commands. When two commands run in parallel, which commands see which data depends on how the OS schedules them.
Since printf is the first and only command in your function, standard input is effectively ignored. There are several ways around that, including using the read built-in to read standard input into a variable which can be passed to printf:
jc_hms () {
    read foo
    hr=$(( foo / 3600 ))
    min=$(( (foo % 3600) / 60 ))
    sec=$(( foo % 60 ))
    printf "%d:%02d:%02d" "$hr" "$min" "$sec"
}
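Used in a pipeline, that version should give you exactly the one-liner you wanted:
printstring=$(echo $songtime | awk '{print $1}' | jc_hms)   # e.g. 7800 seconds becomes 2:10:00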
However, since your need for a pipeline seems to depend on your perceived need to use awk, let me suggest the following alternative:
printstring=$( jc_hms $songtime )
Since songtime consists of a space-separated pair of numbers, the shell performs word-splitting on the value of songtime, and jc_hms sees two separate parameters. This requires no change in the definition of jc_hms, and no need to pipe anything into it via standard input.
If you still have a different reason for jc_hms to read standard input, please let us know.
Piped data doesn't arrive in a bash function as positional parameters like that; however, you can use read inside the function to pull it in instead:
jc_hms() {
    while read -r data; do
        printf "%s" "$data"
    done
}
This should be what you want.
1) I know this is a pretty old post.
2) I like most of the answers here.
However, I found this post because I needed to do something similar. While everyone agrees stdin is what needs to be used, what the answers here are missing is the actual usage of the /dev/stdin file.
Using the read builtin forces this function to be used with piped input, so it can no longer be used in a typical way. I think utilizing /dev/stdin is a superior way of solving this problem, so I wanted to add my 2 cents for completeness.
My solution:
jc_hms() {
    declare -i i=${1:-$(</dev/stdin)}
    declare hr=$((i/3600)) min=$((i/60%60)) sec=$((i%60))
    printf "%02d:%02d:%02d\n" $hr $min $sec
}
In action:
user@hostname:pwd$ jc_hms 7800
02:10:00
user@hostname:pwd$ echo 7800 | jc_hms
02:10:00
I hope this may help someone.
Happy hacking!
Or you can also do it the simple way:
jc_hms() {
    cat
}
Though all answers so far have disregarded the fact that this was not what the OP wanted (he stated the function is simplified).
I like user.friendly's answer using the Bash built-in conditional unset substitution syntax.
Here's a slight tweak to make his answer more generic, such as for cases with an indeterminate parameter count:
function myfunc() {
    declare MY_INPUT=${*:-$(</dev/stdin)}
    for PARAM in $MY_INPUT; do
        : # do what needs to be done on each input value (the : no-op keeps the loop body non-empty)
    done
}
Hmmmm....
songplaytime=`echo $songtime | awk '{print $1}'`
printstring="`jc_hms $songplaytime`" # store resulting string in printstring
If you're calling awk anyway, why not use it?
printstring=`TZ=UTC gawk -vT=$songplaytime 'BEGIN{print strftime("%T",T)}'`
I'm assuming you're using GNU Awk, which is the best one and also free; this will work on common Linux distros which aren't necessarily running the most recent gawk. The most recent versions of gawk will let you specify UTC as a third parameter to the strftime() function.
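For what it's worth, on such a recent gawk the utc-flag third argument replaces the TZ=UTC prefix; a sketch:
printstring=$(gawk -v T="$songplaytime" 'BEGIN{ print strftime("%T", T, 1) }')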
The proposed solutions either require content on stdin or require read to be called only conditionally; otherwise the function will wait for input from the console and require an Enter or Ctrl+D before continuing.
A workaround is to use read with a timeout, e.g. read -t <seconds>:
function test ()
{
    # ...
    # process any parameters
    # ...
    read -t 0.001 piped
    if [[ "${piped:-}" ]]; then
        echo "$piped"
    fi
}
Note: -t 0 did not work for me.
You might have to use a different value for the timeout.
Too small a value might result in bugs, and too large a timeout delays the script.
It seems nothing works directly, but there are workarounds.
One workaround that has been mentioned uses xargs to re-evaluate the function definition (here via zsh's functions builtin):
$ FUNCS=$(functions hi); seq 3 | xargs -I{} zsh -c "eval $FUNCS; hi {}"
But this doesn't work either, because your function could reference another function. So I ended up writing functions that accept piped input, like this:
somefunc() {
    while read -r data; do
        printf "%s" "$data"
    done
}
I have some data files from a legacy system that I would like to process using Awk. Each file consists of a list of records. There are several different record types and each record type has a different set of fixed-width fields (there is no field separator character). The first two characters of the record indicate the type, from this you then know which fields should follow. A file might look something like this:
AAField1Field2LongerField3
BBField4Field5Field6VeryVeryLongField7Field8
CCField99
Using Gawk I can set the FIELDWIDTHS, but that applies to the whole file (unless I am missing some way of setting this on a record-by-record basis), or I can set FS to "" and process the file one character at a time, but that's a bit cumbersome.
Is there a good way to extract the fields from such a file using Awk?
Edit: Yes, I could use Perl (or something else). I'm still keen to know whether there is a sensible way of doing it with Awk though.
Hopefully this will lead you in the right direction. Assuming your multi-line records are guaranteed to be terminated by a 'CC' type row, you can pre-process your text file using simple if-then logic. I have presumed you require fields 1, 5 and 7 on one row; a sample awk script would be:
BEGIN {
    field1 = ""
    field5 = ""
    field7 = ""
}
{
    record_type = substr($0, 1, 2)
    if (record_type == "AA")
    {
        field1 = substr($0, 3, 6)
    }
    else if (record_type == "BB")
    {
        field5 = substr($0, 9, 6)
        field7 = substr($0, 21, 18)
    }
    else if (record_type == "CC")
    {
        print field1 "|" field5 "|" field7
    }
}
Create an awk script file called program.awk and pop that code into it. Execute the script using:
awk -f program.awk < my_multi_line_file.txt
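For the sample file above, this prints:
Field1|Field5|VeryVeryLongField7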
Maybe you can use two passes:
1step.awk
/^AA/{printf "2 6 6 12" }
/^BB/{printf "2 6 6 6 18 6"}
/^CC/{printf "2 8" }
{printf "\n%s\n", $0}
2step.awk
NR%2 == 1 {FIELDWIDTHS=$0}
NR%2 == 0 {print $2}
And then
awk -f 1step.awk sample | awk -f 2step.awk
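Assuming GNU Awk (FIELDWIDTHS is a gawk extension), the sample file then yields the second field of each record:
Field1
Field4
Field99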
You probably need to suppress (or at least ignore) awk's built-in field separation code, and use a program along the lines of:
awk '/^AA/ { manually process record AA out of $0 }
/^BB/ { manually process record BB out of $0 }
/^CC/ { manually process record CC out of $0 }' file ...
The manual processing will be a bit fiddly - I suppose you'll need to use the substr function to extract each field by position, so what I've got as one line per record type will be more like one line per field in each record type, plus the follow-on printing.
I do think you might be better off with Perl and its unpack feature, but awk can handle it too, albeit verbosely.
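To illustrate, here is a rough sketch of that manual processing, with field widths inferred from the sample lines (treat them as assumptions, since the real record layouts aren't given):
awk '/^AA/ { print substr($0, 3, 6), substr($0, 9, 6), substr($0, 15, 12) }
     /^BB/ { print substr($0, 3, 6), substr($0, 9, 6), substr($0, 15, 6),
             substr($0, 21, 18), substr($0, 39, 6) }
     /^CC/ { print substr($0, 3, 8) }' file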
Could you use Perl and then select an unpack template based on the first two chars of the line?
Better to use a fully featured scripting language like Perl or Ruby.
What about two scripts? E.g. the first script inserts field separators based on the first characters, then the second processes it?
Or, first of all, define a function in your AWK script which splits the lines into variables based on the input. I would go this way, for the sake of re-usability.
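A minimal sketch of that idea with gawk, again treating the field widths from the sample data as assumptions (f[] ends up as a global array):
awk '
function splitrec(   type) {
    # split the fixed-width record in $0 into the global f[] array,
    # dispatching on the two-character type prefix
    delete f
    type = substr($0, 1, 2)
    if (type == "AA") {
        f[1] = substr($0, 3, 6); f[2] = substr($0, 9, 6); f[3] = substr($0, 15, 12)
    } else if (type == "BB") {
        f[4] = substr($0, 3, 6); f[5] = substr($0, 9, 6); f[6] = substr($0, 15, 6)
        f[7] = substr($0, 21, 18); f[8] = substr($0, 39, 6)
    } else if (type == "CC") {
        f[9] = substr($0, 3, 8)
    }
    return type
}
{ if (splitrec() == "BB") print f[7] }   # prints VeryVeryLongField7 for the sample
' file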