Guile | How to parse a file?

I am trying to figure out how to set variables of a Guile script through a config file, instead of having to edit the source code.
I have a file called test.cfg that contains this:
name = Gareth
my-num = 123
rand-string = Hello, world!
Here is a script named read-file that I have so far:
#!/usr/bin/guile \
-e main -s
!#
(use-modules (ice-9 textual-ports))

(define (read-file file)
  (call-with-input-file file
    (lambda (port)
      (get-string-all port))))

(define get-name
  (call-with-input-file "test.cfg"
    ;; Code to get value of `name` from test.cfg here.
    ))

(define (main args)
  (display (read-file "test.cfg"))
  (display (get-name))
  (newline))
The end result should be that when name is changed in test.cfg, get-name in read-file returns the new value.
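One possible approach (an untested sketch; config-ref is just an illustrative name, not a library procedure) is to read test.cfg line by line with (ice-9 rdelim) and pull the value out of each "key = value" line with a regular expression from (ice-9 regex):

(use-modules (ice-9 rdelim)
             (ice-9 regex))

;; Illustrative helper: scan FILE line by line and return the text after
;; "KEY = ", or #f if KEY never appears.
(define (config-ref file key)
  (call-with-input-file file
    (lambda (port)
      (let loop ((line (read-line port)))
        (cond
         ((eof-object? line) #f)
         ((string-match (string-append "^" key "[[:space:]]*=[[:space:]]*(.*)$") line)
          => (lambda (m) (match:substring m 1)))
         (else (loop (read-line port))))))))

(define (get-name) (config-ref "test.cfg" "name"))

Because config-ref re-reads test.cfg on every call, editing the file changes what (get-name) returns the next time the script runs.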

Related

How to compare two parts of a line multiple times in multiple files with specific extension?

NOTE BEFORE READING: The following question is described very precisely, which is the reason for its length. If you want to understand the problem, it is best to read the entire thing. Many thanks for all the answers!
I am working on a bash script (.sh file) which will check certain values in every file of a directory. The script will be executed as a pre-commit hook (the hook itself is not part of the question).
There is a directory that contains multiple .c files in multiple subdirectories. I want to check a part of two lines which are NOT in every .c file but only in some of them. The structure of a file that contains the useful information is as follows:
/*
## SYMBOL = some_symbol1
## A2L_TYPE = PARAMETER
.
.
.
#! DEFAULT = some_value1
## END
*/
some_symbol1 = some_value1
/*
## SYMBOL = some_symbol2
## A2L_TYPE = PARAMETER
.
.
.
#! DEFAULT = some_value2
## END
*/
some_symbol2 = some_value2
This kind of structure is automatically generated by another script.
I want to check if some_value1 (in comment) is equal to some_value1 (in variable).
There are hundreds of these variables in each such .c file (though not every .c file contains them).
The main functionality of the script should be:
Check some_value1 in the comment and in the variable and throw an error if they are not the same. The script has to go through EVERY .c file in the directory (the script sits in the root) and ALL subdirectories to find the previously mentioned structure.
The value of the variable can be something like 0.06F while the comment has 0.06 (compare only the numbers).
The value of the variable can also be an array: { 0.0F, 0.45F, 0.3F }, where the comment has [ 0.0, 0.45, 0.3 ] (no F suffix and different braces).
To summarize:
I want to build a check script that compares some_value1 (in the comment) with some_value1 (in the variable) and throws an error if they don't match (see the sketch after this list).
The useful information is not in EVERY .c file, only in some of them (I don't know which).
The value after #! DEFAULT sits in a comment, and the value of the variable is a number (maybe this is not that important?).
Between A2L_TYPE and DEFAULT there can be any number of unimportant lines (still inside the comment).
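As a concrete illustration of that comparison (a sketch with made-up values, not part of the final script), both forms can be reduced to the same canonical string before comparing:

comment='[ 0.0, 0.45, 0.3 ]'      # value from the #! DEFAULT comment
variable='{ 0.0F, 0.45F, 0.3F }'  # value of the C variable
norm() { tr -d -c '0-9.,\n' <<< "$1"; }   # keep only digits, dots and commas
[ "$(norm "$comment")" = "$(norm "$variable")" ] && echo match || echo mismatch
# prints: match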
What I have tried so far is a for loop over every .c file with a nested loop that reads every line of each file. The idea was to run a grep command inside the loop to check each line for the #! DEFAULT pattern and save the match to a variable.
Latest code that I tried:
#!/bin/bash
shopt -s globstar
for d in */**/*.c
do
    while IFS="" read -r p || [ -n "$p" ]
    do
        grep -P "#! DEFAULT" $d
    done < $d
done
This is currently not working: it gives an error that certain grep targets are directories.
If anyone has questions, I will try to explain it better.
# search for files with extension ".c"
# execute awk on any matches, using '=' plus optional whitespace as the field separator
find . -type f -name '*.c' -exec awk -F'=[[:space:]]*' '
    # check if the first three lines match the template
    ( NR==1 && /^\/\*/ ) ||
    ( NR==2 && /^## SYMBOL = / ) ||
    ( NR==3 && /^## A2L_TYPE = PARAMETER/ ) { ok++ }

    # template mismatch - skip this file
    ( NR==4 && ok!=3 ) {
        printf "%s : ignored\n", FILENAME
        nextfile
    }

    # store first occurrence of some_value1
    # note line number where the second occurrence is expected
    /^#! DEFAULT =/ { v[1]=v1=$2; n=NR+3 }

    # test second occurrence
    NR==n {
        v[2]=v2=$2;
        # prune everything except numbers and array delimiters
        for (s in v) gsub(/[^0-9.,]/,"",v[s]);
        # output result: match exactly, or match on the pruned number lists
        printf "%s #(%d,%d) : ", FILENAME, n-3, n
        if (v1==v2 || v[1]==v[2])
            printf "match (%s)==(%s)\n", v1, v2
        else
            printf "mismatch (%s)!=(%s)\n", v1, v2
        # no need to check the rest of this file;
        # remove nextfile to check multiple values per file
        nextfile
    }
' {} +

Simplify debugging output. Specific request for in-place command text replacement in BASH

OK, so, I have a debugging setup a little like this:
DVAR=();
function DBG() {
    if [[ $1 == -s ]]; then shift; stackTrace; fi;
    if [[ ! -z ${DVAR[@]} ]]; then
        for _v in ${!DVAR[@]}; do
            echo "${DVAR[$_v]}" >> $LOG;
            unset DVAR[$_v];
        done;
    fi
    local tmp=("$*")
    [[ ! -z $tmp ]]&&echo "$tmp" >> $LOG||continue;
}
Every once in a while I call it either directly or (and I'd like to take this approach more often) by repeatedly adding things to the array and calling DBG later. SPECIFICALLY, I'd like to be using this:
DVAR+="${0##*/}${FUNCNAME[0]}:$LINENO === assorted local variables and stuff here ====="
That first part is quite a mouthful and really clutters up my code. I'd REALLY rather be able to say something like:
DBG === assorted local variables and stuff here=====
I've tried messing around with alias and even eval, all to no... evail. Ahem.
Thoughts anyone?
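One way this could work (untested sketch; DBGQ is just an illustrative name) is a small wrapper that builds the prefix from its caller's frame, so the call site only supplies the message:

# Queue a debug entry without typing the prefix at each call site.
# FUNCNAME[1] is the calling function and BASH_LINENO[0] is the line it called
# from, i.e. what FUNCNAME[0] and $LINENO would be at the call site itself.
function DBGQ() {
    DVAR+=("${0##*/}${FUNCNAME[1]}:${BASH_LINENO[0]} === $* =====")
}

A call like DBGQ assorted local variables and stuff here then appends one entry to DVAR, and DBG can flush the array later as before.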
Like @ufopilot said, you should add your line as a new entry in the array with DVAR+=("...."), not overwrite the array with one long string by concatenating to its first element. Here's an example explaining it:
Concatenating:
$ DVAR=()
$ DVAR+="foo"
$ DVAR+="bar"
$ declare -p DVAR
declare -a DVAR=([0]="foobar")
$ echo ${DVAR[@]}
foobar
Appending new entry:
$ DVAR=()
$ DVAR+=(foo)
$ DVAR+=(bar)
$ declare -p DVAR
declare -a DVAR=([0]="foo" [1]="bar")
$ echo "${DVAR[#]}"
foo bar
Here's an example of a function I put together for debugging purposes some time ago. The function is get_stack, which gets the name of whatever function called it and the file that calling function lives in, along with the whole trace so you can see the call history.
File test.sh:
#!/usr/bin/env bash
foo() {
    get_stack
    echo -e "$stack_trace" # Note the quotation to get indentation
}
get_stack () {
    stack_trace=""
    local i stack_size=${#FUNCNAME[@]}
    local indent=" "
    local newline="" # newline only after first line
    # Offset to skip get_stack function
    for (( i=0; i<$stack_size; i++ )); do
        local func="${FUNCNAME[$i]}"
        [ x$func = x ] && func=MAIN
        local linen="${BASH_LINENO[$(( i - 1 ))]}"
        local src="${BASH_SOURCE[$i]}"
        [ x"$src" = x ] && src=non_file_source
        stack_trace+="${newline}${indent}-> $src [$func]: $linen"
        newline="\n"
        indent="$indent "
    done
}
echo "stack from test.sh"
foo
File test2.sh:
#!/usr/bin/env bash
source test.sh
echo "stack from test2.sh"
foo
Output:
stack from test.sh
-> test.sh [foo]: 4
-> test.sh [source]: 28
-> ./test2.sh [main]: 3
stack from test2.sh
-> test.sh [foo]: 4
-> ./test2.sh [main]: 6
In my script I have a down-right arrow character that looks better than "->", but I can't figure out how to get Stack Overflow to display it properly. The character is \u21b3 (↳).
As you can see in the stack trace, "foo" ran upon sourcing the file, just as it's supposed to. But now it is clear why it ran and produced output!
I think this demonstrates well how you can use the array FUNCNAME to walk backwards through the call stack. BASH_SOURCE is also an array, with matching indices, but it tells you which file each call came from. Modifying foo to inspect these arrays:
foo() {
    get_stack
    echo -e "$stack_trace"
    echo "${BASH_SOURCE[@]}"
    echo "${FUNCNAME[@]}"
}
yields:
stack from test.sh
-> test.sh [foo]: 4
-> test.sh [source]: 30
-> ./test2.sh [main]: 3
test.sh test.sh ./test2.sh
foo source main
stack from test2.sh
-> test.sh [foo]: 4
-> ./test2.sh [main]: 6
test.sh ./test2.sh
foo main
As you can see, the first set of outputs belongs to test.sh and came from the sourcing. You can see this in the FUNCNAME line: "foo source main". The "main" entry is the main scope, i.e. the code that runs in the body of the file rather than inside a function. In this case test.sh contains the call to "foo", which runs as soon as the file is sourced.
I hope this shows how the same index in each array belongs to the same call but yields different information.
Now for your part: you wanted to add only the message string, not the whole same-y info again and again. Since I'm not sure whether you want the stack trace or not, I simply attached the message string to the first calling function in the trace. I also fixed up some of the old code to make it a little better.
New improved function with message:
get_stack () {
    local msg="$@"
    stack_trace=""
    local i stack_size=${#FUNCNAME[@]}
    local indent=" "
    # Offset to skip get_stack function
    for (( i=1; i<$stack_size; i++ )); do
        local func="${FUNCNAME[$i]}"
        [ x$func = x ] && func=MAIN
        local linen="${BASH_LINENO[$(( i - 1 ))]}"
        local src="${BASH_SOURCE[$i]}"
        [ x"$src" = x ] && src=non_file_source
        stack_trace+="${newline:=\n}${indent}-> $src [$func:$linen]${msg:+": $msg"}"
        msg=""
        indent="$indent "
    done
}
Output:
$ ./test2.sh
stack from test.sh
-> test.sh [foo:4]: my message
-> test.sh [source:28]
-> ./test2.sh [main:3]
stack from test2.sh
-> test.sh [foo:4]: my message
-> ./test2.sh [main:6]
Note that ${var:=X} will initialize var to X if var was unset or empty, ${var:+X} expands to X if var is set to something non-empty, and finally, as a bonus, ${var:-X} expands to X if var is unset or empty (without assigning it). This is bash parameter expansion, and it is quite handy!
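A quick interactive illustration of those three expansions:

$ unset var; echo "${var:=default}"    # var was unset, so it is assigned and expands to "default"
default
$ echo "${var:+replacement}"           # var is now non-empty, so this expands to "replacement"
replacement
$ unset var; echo "${var:-fallback}"   # var is unset, so this expands to "fallback" (var stays unset)
fallback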
There are some pieces here you can cut and paste into your own function, or if you need a stack-based log you can use my function as a base.
Hope this helps you in your endeavors!

How to read a file into a variable and print it using that variable with exact format in shell

I am trying to read the id_rsa file into a variable var (set var=`cat id_rsa`) in tcsh, to provide input to a program. But when I echo the variable (echo "$var"), the newlines are gone and the file content is on one line. How do I correctly store and print the variable?
Don't use tcsh for this task: getting the output of a command into a variable verbatim is unnecessarily difficult.
Some workarounds, if you have to use tcsh, are:
use redirection
% yourtool < id_rsa
Store the variable as base-16 (or something else) encoded stuff, so that it doesn't contain any newline characters that will get mangled by tcsh.
% set hex_contents = `<id_rsa xxd -l 16 -p`
Use a tempfile?
% set tempfile = `mktemp`
% program > $tempfile
... later
% <$tempfile other-program
I asked a similar question almost a year ago: https://unix.stackexchange.com/questions/284220/tcsh-preserve-newlines-in-command-substitution
In case you're curious, this is how you get the verbatim contents (credit Stéphane Chazelas):
set temp = "`(some command; echo .) | paste -d . - /dev/null`"
set var = ""
set nl = '\
'
foreach i ($temp:q)
    set var = $var:q$i:r:q$nl:q
end
set var = $var:r:q

How to make an R script take input from a pipe and a user-given parameter

I have the following R script (myscript.r)
#!/usr/bin/env Rscript
dat <- read.table(file('stdin'), sep=" ",header=FALSE)
# do something with dat
# later with user given "param_1"
With that script we can run it the following way:
$ cat data_no_*.txt | ./myscript.r
What I want to do is make the script take an additional parameter from the user:
$ cat data_no_*.txt | ./myscript.r param_1
What should I do to modify myscript.r to accommodate that?
For very basic usage, have a look at ?commandArgs.
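For instance, a minimal version of myscript.r built only on commandArgs() might look like this (untested sketch):

#!/usr/bin/env Rscript
args <- commandArgs(trailingOnly = TRUE)   # everything after the script name
param_1 <- args[1]                         # e.g. "param_1" from the command line
dat <- read.table(file("stdin"), sep = " ", header = FALSE)
# do something with dat and param_1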
For more complex usage, two popular packages for command-line argument and option parsing are getopt and optparse. I use them all the time; they get the job done. I also see argparse, argparser, and GetoptLong, but I have never used them. One I missed: Dirk recommended that you look at docopt, which does seem very nice and easy to use.
Finally, since you seem to be passing data via pipes, you might find this OpenRead() function useful for generalizing your code and allowing your arguments to be pipes or files.
I wanted to test docopt, so, putting it all together, your script could look like this:
#!/usr/bin/env Rscript

## Command-line parsing ##
'usage: my_prog.R [-v -m <msg>] <param> <file_arg>
options:
 -v verbose
 -m <msg> Message' -> doc

library(docopt)
opts <- docopt(doc)
if (opts$v) print(str(opts))
if (!is.null(opts$m)) cat("MESSAGE: ", opts$m)

## File Read ##
OpenRead <- function(arg) {
  if (arg %in% c("-", "/dev/stdin")) {
    file("stdin", open = "r")
  } else if (grepl("^/dev/fd/", arg)) {
    fifo(arg, open = "r")
  } else {
    file(arg, open = "r")
  }
}

dat.con <- OpenRead(opts$file_arg)
dat <- read.table(dat.con, sep = " ", header = FALSE)
# do something with dat and opts$param
And you can test running:
echo "1 2 3" | ./test.R -v -m HI param_1 -
or
./test.R -v -m HI param_1 some_file.txt
We built littler to support just that via its r executable.
Have a look at its examples; it may fit the bill.
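As a sketch of how that might look (based on littler's convention of exposing the command-line arguments as a character vector named argv; double-check against its examples):

#!/usr/bin/env r
# assumption: littler provides the arguments in `argv`
dat <- read.table(file("stdin"), sep = " ", header = FALSE)
param_1 <- argv[1]
# do something with dat and param_1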

In Emacs: exclude folders from searching IDs with "gid"

I have this snippet of code in my dotemacs file that helps me view IDs.
How is it possible to exclude some folders from the ID search?
; gid.el -- run gid using compilation mode.
;(require 'compile)
;(require 'elisp-utils)
;(provide 'gid)

(defvar gid-command "gid" "The command run by the gid function.")

(defun gid (args)
  "Run gid, with user-specified ARGS, and collect output in a buffer.
While gid runs asynchronously, you can use the \\[next-error] command to
find the text that gid hits refer to.  The command actually run is
defined by the gid-command variable."
  (interactive (list
                ;(read-input (concat "Run " gid-command " (with args): ") ;confirmation
                (word-around-point)))
  ;)
  ;; Preserve the present compile-command
  (let (compile-command
        (gid-buffer ;; if gid for each symbol use: compilation-buffer-name-function
         (lambda (mode) (concat "*gid " args "*"))))
    ;; For portability between v18 & v19, use compile rather than compile-internal
    (compile (concat gid-command " " args))))

(defun word-around-point ()
  "Return the word around the point as a string."
  (save-excursion
    (if (not (eobp))
        (forward-char 1))
    (forward-word -1)
    (forward-word 1)
    (forward-sexp -1)
    (let ((beg (point)))
      (forward-sexp 1)
      (buffer-substring beg (point)))))
Found the solution.
Simply prune any uninteresting folders when building the ID database, like this:
mkid --prune X
