AC_SUBST with dynamic variable name - autoconf

I'm trying to create an m4 macro that basically calls AC_CHECK_SIZEOF(type) then uses AC_SUBST to define that variable for substitution.
So given input of:
AX_CHECK_SIZEOF_AND_SUBST(int, 4)
I want all occurrences of #SIZEOF_INT# to be replaced with 4.
This is what I came up with so far, but it obviously doesn't work:
AC_DEFUN([AX_CHECK_SIZEOF_AND_SUBST], [
AC_CHECK_SIZEOF($1, $2)
NAME=$(echo -n "SIZEOF_$1" | tr "a-z" "A-Z" | tr '*' 'P' | tr -c 'A-Z0-9' '_')
echo "NAME=$NAME"
AC_SUBST($NAME, $$NAME)
])

The trouble with what you are trying to do is that AC_CHECK_SIZEOF does not in fact define a variable named SIZEOF_INT. In autoconf 2.68, the variable you want is named ac_cv_sizeof_int, but you should not use that, as the name is subject to change in later versions. The value is also written into confdefs.h, so another way to grab it is:
AC_PROG_AWK
AC_CHECK_SIZEOF([int])
SIZEOF_INT=$($AWK '/SIZEOF_INT/{print $3}' confdefs.h)
AC_SUBST([SIZEOF_INT])
(reading confdefs.h is also undocumented behavior and subject to change in future versions of autoconf, but is possibly more stable than looking at $ac_cv_sizeof_int. Possibly, less stable, too. ;) YMMV)
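For reference, confdefs.h accumulates #define lines as the configure script runs; after AC_CHECK_SIZEOF([int]) it contains a line like the following, which is why the awk scripts above and below print field $3:
#define SIZEOF_INT 4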
To define your macro, you could do the following (note that I have changed the AX_ prefix, which is conventionally reserved for macros from the Autoconf Macro Archive, to wrp_):
AC_DEFUN([wrp_CHECK_SIZEOF_AND_SUBST], [
AC_REQUIRE([AC_PROG_AWK])
AC_CHECK_SIZEOF([$1])
m4_toupper(SIZEOF_$1)=$($AWK '
/SIZEOF_[]m4_toupper($1)/{print $[]3}' confdefs.h)
AC_SUBST(m4_toupper(SIZEOF_$1))
])
The version above does not handle int *, but for simplicity I will keep it there rather than replacing it with this more general version:
AC_DEFUN([wrp_CHECK_SIZEOF_AND_SUBST], [
AC_REQUIRE([AC_PROG_AWK])
AC_CHECK_SIZEOF([$1])
m4_pushdef([name],SIZEOF_[]m4_toupper(m4_translit($1,[ *],[_p])))
name=$($AWK '/name/{print $[]3}' confdefs.h)
AC_SUBST(name)
m4_popdef([name])
])
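As a hedged usage sketch (the file names and MY_ defines are illustrative, not from the question): in configure.ac you would call the macro and declare a template,
wrp_CHECK_SIZEOF_AND_SUBST([int])
wrp_CHECK_SIZEOF_AND_SUBST([int *])
AC_CONFIG_FILES([sizes.h])
and a sizes.h.in along these lines would then have its placeholders filled in by config.status (note that it substitutes @VAR@ placeholders, not the #VAR# style from the question):
#define MY_SIZEOF_INT   @SIZEOF_INT@
#define MY_SIZEOF_INT_P @SIZEOF_INT_P@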
Note: I believe the $() notation should be avoided in portable configure scripts, and should be replaced with backticks. However, I find backticks hideous.

Related

Is there a way to list all categories in perluniprops?

perluniprops lists the Unicode properties of the version of Unicode it supports. For Perl 5.32.1, that's Unicode 13.0.0.
You can obtain a list of the characters that match a category using Unicode::Tussle's unichars.
unichars '\p{Close_Punctuation}'
And the help:
$ unichars --help
Usage:
unichars [*options*] *criterion* ...
Each criterion is either a square-bracketed character class, a regex
starting with a backslash, or an arbitrary Perl expression. See the
EXAMPLES section below.
OPTIONS:
Selection Options:
--bmp include the Basic Multilingual Plane (plane 0) [DEFAULT]
--smp include the Supplementary Multilingual Plane (plane 1)
--astral -a include planes above the BMP (planes 1-15)
--unnamed -u include various unnamed characters (see DESCRIPTION)
--locale -l specify the locale used for UCA functions
Display Options:
--category -c include the general category (GC=)
--script -s include the script name (SC=)
--block -b include the block name (BLK=)
--bidi -B include the bidi class (BC=)
--combining -C include the canonical combining class (CCC=)
--numeric -n include the numeric value (NV=)
--casefold -f include the casefold status
--decimal -d include the decimal representation of the code point
Miscellaneous Options:
--version -v print version information and exit
--help -h this message
--man -m full manpage
--debug -d show debugging of criteria and examined code point span
Special Functions:
$_ is the current code point
ord is the current code point's ordinal
NAME is charname::viacode(ord)
NUM is Unicode::UCD::num(ord), not code point number
CF is casefold->{status}
NFD, NFC, NFKD, NFKC, FCD, FCC (normalization)
UCA, UCA1, UCA2, UCA3, UCA4 (binary sort keys)
Singleton, Exclusion, NonStDecomp, Comp_Ex
checkNFD, checkNFC, checkNFKD, checkNFKC, checkFCD, checkFCC
NFD_NO, NFC_NO, NFC_MAYBE, NFKD_NO, NFKC_NO, NFKC_MAYBE
Other than reading the list of categories from the webpage, is there a way to programmatically get all the possible \p{...} categories?
From the comments, I believe you are trying to port a Perl program that uses \p regex properties to Python. You don't need a list of all categories (whatever that means); you just need to know which code points each of the properties used by the program matches.
Now, you could get the list of code points from the Unicode database, but a much simpler solution is to use Python's regex module instead of the re module. This will give you access to the same Unicode-defined properties that Perl exposes.
The latest version of the regex module even uses Unicode 13.0.0 just like the latest Perl.
Note that the program uses \p{IsAlnum}, a long way of writing \p{Alnum}. \p{Alnum} is not a standard Unicode property, but a Perl extension: it's the union of the Unicode properties \p{Alpha} and \p{Nd}. I don't know if the regex module defines Alnum identically, but it probably does.
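As a quick way to check a property from the shell (a sketch assuming the third-party regex module is installed, e.g. via pip install regex), the following should print the two Close_Punctuation characters in the sample string:
$ python3 -c 'import regex; print(regex.findall(r"\p{Close_Punctuation}", "a)b]c"))'
[')', ']']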

Need advice for using nested command substitution

I'm trying to wrap my head around nested command substitution. I tried nesting backticks but obviously that doesn't work. How would you nest the following without declaring the ${host} variable first?
host=$(hostname|cut -c1-14);for id in `aladmin list|grep ${host}|awk '{print $2}'`;do aladmin delete ${id};done
The command lists all alarms on a server, greps for the first 14 characters of the hostname and then deletes the alarm with the alarm ID found in field 2 by awk.
My question in no way duplicates this previous post:
How to properly nest Bash backticks
Thanks in advance,
Bjoern
Do everything in awk. There's no need to use the for loop and the grep, etc. There are better ways than this, but as a first approximation, try something like:
aladmin list | awk "/$(hostname | cut -c1-14)/"'{ print "aladmin delete " $2 | "sh"}'
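For the record, the nesting itself is straightforward if you use $() instead of backticks, since $() nests without any escaping (aladmin is the tool from the question):
for id in $(aladmin list | grep "$(hostname | cut -c1-14)" | awk '{print $2}'); do
    aladmin delete "$id"
done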

Bash script key/value pair regardless of bash version

I am writing a curl bash script to test web services. I have a file, file_1, which contains the URL paths:
/path/to/url/1/{dynamic_path}.xml
/path/to/url/2/list.xml?{query_param}
Since the values between {} are dynamic, I am creating a separate file which will hold values for these params. The input will be key-value pairs, i.e.:
dynamic_path=123
query_param=shipment
By combining the two files, the result should become:
/path/to/url/1/123.xml
/path/to/url/2/list.xml?shipment
That is the background of my problem. Now, my questions.
I am doing it in a bash script, and my approach is to first read the parameter file, parse it on '=', and store the results as key/value pairs, so replacement will be easy: for each URL, I find the substring between {} and use that text as the key to fetch the value from the array.
My approach sounds okay (at least to me), BUT I just realized that
declare -A input_map is only supported in bash 4.0 and higher, and I am not 100% sure what the target environment for my script will be, since it could run in multiple departments.
Is there anything better you could suggest? Any other approach? Any other design?
P.S.: This is the first time I am working with bash scripts.
Here's a risky way to do it, assuming the values are in a file named "values":
. values
eval "$( sed 's/^/echo "/; s/{/${/; s/$/"/' file_1 )"
Basically, stick a dollar sign in front of the braces and transform each line into an echo statement.
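To see the script that eval will run, you can preview the sed output on its own (using the paths from the question):
$ sed 's/^/echo "/; s/{/${/; s/$/"/' file_1
echo "/path/to/url/1/${dynamic_path}.xml"
echo "/path/to/url/2/list.xml?${query_param}"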
More effort, with awk:
awk '
NR==FNR {split($0, a, /=/); v[a[1]]=a[2]; next}
(i=index($0, "{")) && (j=index($0,"}")) {
key=substr($0,i+1, j-i-1)
print substr($0, 1, i-1) v[key] substr($0, j+1)
}
' values file_1
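Given the two sample files from the question, this prints:
/path/to/url/1/123.xml
/path/to/url/2/list.xml?shipment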
There are many ways to do this. You seem to be thinking of putting all inputs in a hashmap and then iterating over that hashmap. In shell scripting it's more common and practical to process things as a stream using pipelines.
For example, your inputs could be in a csv file:
123,shipment
345,order
Then you could process this file like this:
while IFS=, read path param; do
sed -e "s/{dynamic_path}/$path/" -e "s/{query_param}/$param/" file_1
done < input.csv
The output will be:
/path/to/url/1/123.xml
/path/to/url/2/list.xml?shipment
/path/to/url/1/345.xml
/path/to/url/2/list.xml?order
But this is just an example; there are many other ways.
You should definitely start by writing a proof of concept and testing it on your deployment server. This example should work in old versions of bash, too.
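If you want to stay close to your original key/value design without declare -A, here is a minimal sketch that also works in bash 3, assuming the parameter file is named values and the values themselves contain no braces:
# Look up a key's value in the parameter file.
lookup() { sed -n "s/^$1=//p" values; }

while IFS= read -r line; do
    # Replace every {key} placeholder on the line.
    while [[ $line == *'{'*'}'* ]]; do
        key=${line#*\{}     # strip everything up to and including the first {
        key=${key%%\}*}     # strip everything from the first } onward
        line=${line/"{$key}"/$(lookup "$key")}
    done
    printf '%s\n' "$line"
done < file_1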

Extracting JSON variable using bash

I need to extract the variable from a JSON encoded file and assign it to a variable in Bash.
An excerpt from file.json:
"VariableA": "VariableA data",
"VariableB": [
"VariableB1",
"VariableB2",
"VariableB3",
"VariableB3"
],
I've gotten somewhere with this
variableA=$(fgrep -m 1 "VariableA" file.json )
but it returns the whole line, and I just want the data.
For VariableB, I need to replace the list with comma-separated values.
I've looked at awk, sed, grep, and regular expressions, and, given the learning curve, I really need to know which one to use, or a better solution.
Thanks for your suggestions...but this is perfect
git://github.com/kristopolous/TickTick.git
You are better off using a JSON parser. There are many listed at http://json.org/ including two for the BASH shell.
http://kmkeen.com/jshon/
https://github.com/dominictarr/JSON.sh
There is a powerful command-line JSON tool called jq.
Extracting a single value is easy:
variableA=$(jq .VariableA file.json)
For comma-separated array contents, try this:
variableB=$(jq '.VariableB | @csv' file.json)
or
variableB=$(jq '.VariableB | .[]' file.json | tr '\n' ',' | head -c-1)
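Note that jq quotes strings in its output by default; the -r flag emits raw text, which is usually what you want when assigning to a shell variable. As a hedged variant, jq's join filter also builds the comma-separated list directly:
variableA=$(jq -r '.VariableA' file.json)
variableB=$(jq -r '.VariableB | join(",")' file.json)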
If you're open to using Perl, its open() function will let you read the file, and the JSON module provides to_json and from_json for encoding and decoding. You can check them out here:
http://search.cpan.org/~rjbs/perl-5.16.0/lib/open.pm
http://metacpan.org/pod/JSON#to_json
http://metacpan.org/pod/JSON#from_json (you might also try decode_json)

Pipe output to bash function

I have a simple function in a bash script, and I would like to pipe stdout to it as an input.
jc_hms(){
printf "$1"
}
I'd like to use it in this manner.
var=`echo "teststring" | jc_hms`
Of course I used redundant functions echo and printf to simplify the question, but you get the idea. Right now I get a "not found" error, which I assume means my parameter delimiting is wrong (the "$1" part). Any suggestions?
Originally the jc_hms function was used like this:
echo `jc_hms "teststring"` > /dev/tts/0
but I'd like to store the results in a variable for further processing first, before sending it to the serial port.
EDIT:
So to clarify, I am NOT trying to print stuff to the serial port; I'd like to feed input to my bash functions through the "|" pipe character, and I am wondering if this is possible.
EDIT: Alright, here's the full function.
jc_hms(){
hr=$(($1 / 3600))
min=$(($1 / 60))
sec=$(($1 % 60))
printf "$hs:%02d:%02d" $min $sec
}
I'm using the function to form a string, which comes from this line of code:
songplaytime=`echo $songtime | awk '{print $1}'`
printstring="`jc_hms $songplaytime`" #store resulting string in printstring
Where $songtime is a string expressed as "playtime totaltime" delimited by a space.
I wish I could just do this in one line and pipe it after the awk:
printstring=`echo $songtime | awk '{print $1}' | jc_hms`
like so.
To answer your actual question, when a shell function is on the receiving end of a pipe, standard input is inherited by all commands in the function, but only commands that actually read from their standard input consume any data. For commands that run one after the other, later commands can only see what isn't consumed by previous commands. When two commands run in parallel, which commands see which data depends on how the OS schedules the commands.
Since printf is the first and only command in your function, standard input is effectively ignored. There are several ways around that, including using the read built-in to read standard input into a variable which can be passed to printf:
jc_hms () {
read foo
hr=$(($foo / 3600))
min=$((($foo % 3600) / 60))
sec=$(($foo % 60))
printf "%d:%02d:%02d" "$hr" "$min" "$sec"
}
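Called through a pipe, this behaves as you wanted; for example:
$ echo 3735 | jc_hms
1:02:15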
However, since your need for a pipeline seems to depend on your perceived need to use awk, let me suggest the following alternative:
printstring=$( jc_hms $songtime )
Since songtime consists of a space-separated pair of numbers, the shell performs word-splitting on the value of songtime, and jc_hms sees two separate parameters. This requires no change in the definition of jc_hms, and no need to pipe anything into it via standard input.
If you still have a different reason for jc_hms to read standard input, please let us know.
You can't pipe stuff directly to a bash function like that; however, you can use read to pull it in instead:
jc_hms() {
while read -r data; do
printf "%s" "$data"
done
}
That should be what you want.
1) I know this is a pretty old post
2) I like most of the answers here
However, I found this post because I needed to do something similar. While everyone agrees stdin is what needs to be used, what the answers here are missing is the actual usage of the /dev/stdin file.
Using the read builtin forces this function to be used with piped input, so it can no longer be used in a typical way. I think utilizing /dev/stdin is a superior way of solving this problem, so I wanted to add my 2 cents for completeness.
My solution:
jc_hms() {
declare -i i=${1:-$(</dev/stdin)};
declare hr=$(($i/3600)) min=$(($i/60%60)) sec=$(($i%60));
printf "%02d:%02d:%02d\n" $hr $min $sec;
}
In action:
user@hostname:pwd$ jc_hms 7800
02:10:00
user@hostname:pwd$ echo 7800 | jc_hms
02:10:00
I hope this may help someone.
Happy hacking!
Or, you can also do it in a simple way.
jc_hms() {
cat
}
Though all the answers so far have disregarded the fact that this was not what the OP wanted (he stated the function is simplified).
I like user.friendly's answer using Bash's built-in ${parameter:-default} substitution syntax.
Here's a slight tweak to make his answer more generic, such as for cases with an indeterminate parameter count:
function myfunc() {
declare MY_INPUT=${*:-$(</dev/stdin)}
for PARAM in $MY_INPUT; do
# do what needs to be done on each input value
done
}
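For instance, if the loop body is just echo "$PARAM", both of these print one, two and three on separate lines (a hypothetical demonstration):
$ myfunc one two three
$ printf 'one two three' | myfunc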
Hmmmm....
songplaytime=`echo $songtime | awk '{print $1}'`
printstring="`jc_hms $songplaytime`" #store resulting string in printstring
if you're calling awk anyway, why not use it?
printstring=`TZ=UTC gawk -vT=$songplaytime 'BEGIN{print strftime("%T",T)}'`
I'm assuming you're using GNU Awk, which is the best one and also free; this will work in common Linux distros which aren't necessarily using the most recent gawk. The most recent versions of gawk will let you specify UTC as a third parameter to the strftime() function.
The proposed solutions either require content on stdin or require read to be called only conditionally; otherwise the function will wait for input from the console and need an Enter or Ctrl+D before continuing.
A workaround is to use read with a timeout, e.g. read -t <seconds>:
function test ()
{
# ...
# process any parameters
# ...
read -t 0.001 piped
if [[ "${piped:-}" ]]; then
echo $piped
fi
}
Note: -t 0 did not work for me.
You might have to use a different value for the timeout; too small a value might result in bugs, and too large a timeout delays the script.
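An alternative to guessing a timeout is to test whether stdin is a terminal with test -t 0 and read from it only when it is not; here is a minimal sketch applied to the jc_hms function from this thread:
jc_hms() {
    local input
    if [ -t 0 ]; then
        input=$1                 # stdin is a terminal: take the argument
    else
        input=$(</dev/stdin)     # stdin is a pipe or file: read it
    fi
    printf '%02d:%02d:%02d\n' $((input / 3600)) $((input / 60 % 60)) $((input % 60))
}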
It seems nothing works directly, but there are workarounds.
One mentioned workaround uses xargs to re-evaluate the function definition (this example is zsh, whose functions builtin prints a function's source):
$ FUNCS=$(functions hi); seq 3 | xargs -I{} zsh -c "eval $FUNCS; hi {}"
But this doesn't work either when your function references another function, so I ended up writing a function that accepts piped input, like this:
somefunc() {
while read -r data; do
printf "%s" "$data"
done
}
