Notify when GPIO value changes - Linux

I'm currently trying to poll a GPIO value using only a shell script.
I basically developed the script against a test file before using /sys/class/gpio/gpioxx/value.
This is the solution I found:
#!/bin/bash
SCRIPT_DIR=$(dirname "$(readlink -f "$0")")
FILE_NAME="$SCRIPT_DIR/fileTest"
while true
do
    inotifywait -qq -e modify "$FILE_NAME"
    read val < "$FILE_NAME"
    echo "$val"
    ### do something here ###
done
This works with a basic file, but I have two problems with this solution.
1 - The "modify" event is triggered when the file is saved, not when its content actually changes. So if I write the same value to the file, the event is triggered even though it should not be.
2 - I noticed that this solution doesn't work for GPIOs: with a simple ASCII file it works, but when I use inotifywait on /sys/class/gpio/gpioxx/value it depends.
If I use echo value > /sys/class/gpio/gpioxx/value the event is detected, but if I configure the pin as an input and connect it to 3V3 or 0V, nothing is triggered.
Does anyone know how I could detect this change using only scripts?

From linux/Documentation/gpio/gpio-legacy.txt:
"/sys/class/gpio/gpioN/edge"
... reads as either "none", "rising", "falling", or
"both". Write these strings to select the signal edge(s)
that will make poll(2) on the "value" file return.
So you can do:
echo input > /sys/class/gpio/gpioN/direction
echo both > /sys/class/gpio/gpioN/edge
Now, you have to find a command that calls poll (or pselect) on /sys/class/gpio/gpioN/value. (I will update my answer if I find one.)

This is a tight-loop solution (which is more resource intensive), but it will do the trick if you have nothing better:
# Remember the last value seen, then busy-poll the sysfs file for changes
gpio_value=$(cat /sys/class/gpio/gpio82/value)
while true; do
    value=$(cat /sys/class/gpio/gpio82/value)
    if [[ $gpio_value != "$value" ]]; then
        gpio_value=$value
        echo "$(date +'%T.%N') value changed to $gpio_value"
    fi
    # a short "sleep 0.05" here would trade latency for much lower CPU usage
done
Example output:
13:09:52.527811324 value changed to 1
13:09:52.775153524 value changed to 0
13:09:55.439330380 value changed to 1
13:09:55.711569164 value changed to 0
13:09:56.211028463 value changed to 1
13:09:57.082968491 value changed to 0
I use it for debug purposes.
Actually, I use this one-liner more often:
printf " Press any key to stop...\n GPIO value: " ; until read -r -t 0 -n 1 -s key; do printf "\033[2D$(cat /sys/class/gpio/gpio82/value) " ; done ; echo
Again, for debug purposes.

You may use libgpiod, which provides some useful tools to monitor GPIOs. However, it relies on the new GPIO character-device API, available since Linux 4.8.
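For example, gpiomon from libgpiod blocks in the kernel until an edge event arrives, so there is no polling loop at all. A minimal sketch, assuming the libgpiod v1 command-line tools and that the pin is line 82 on gpiochip0 (both assumptions - adjust for your board; the v2 tools changed the syntax):
gpiomon --num-events=1 gpiochip0 82   # waits for a single edge, then exits
You can wrap that in the same while true loop as above to react to every change.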

Related

In Bash, how do I stream the end of a pipeline into a variable?

I know that any variable set at the end of a pipeline is lost (excluding the lastpipe shell option in Bash 4 - unfortunately this needs to be a portable solution). However, I am sure there must be a way with file descriptors or something to stream the output of the end of a pipeline into a variable, even if via some convoluted route! :)
Ideally I want a command/function that takes one argument, the name of a variable that will eventually result in containing the output of the rest of the pipeline.
I have got a function that will find its next free file descriptor in a portable fashion:
getFd ()
{
    # we'll start with 3 since 0..2 are mapped to standard in, out, and error respectively
    local myFD='3'
    # we'll get the upper bound from bash's ulimit
    local FD_MAX=$( ulimit -n )
    local FD_LOC
    if [ -e /proc/$$/fd ]
    then
        FD_LOC="/proc/$$/fd"
    elif [ -e /dev/fd ]
    then
        FD_LOC="/dev/fd"
    else
        return 1
    fi
    while [ -e "${FD_LOC}/${myFD}" ] && [ "${myFD}" -le "${FD_MAX}" ]
    do
        ((++myFD))
    done
    eval FD="${myFD}"
}
I am thinking I might need to create a pool of open file descriptors in advance that could be pulled in via some alias jiggery-pokery, but I am hoping I am missing a much simpler way, as I am sure there must be a better approach.
I was also thinking that if I added printf %s "${myFD}" at the end, I could do something like alias '{FD}'="$( getFd )" to imitate the Bash 4 feature of automatically finding the next available file descriptor, used in the form {FD} <filename (note the need for a space). If this could be made to work, it would be great to bring this feature to Bash 3.0, for example. I would probably also have to use shopt -s expand_aliases.
Any ideas would be greatly appreciated!
P.S. I am trying to avoid forcing the MyCommand MyVariable < <( command1 | command2 ; ) ; syntax, and am striving, if possible, to end up with the $ command1 | command2 | MyCommand MyVariable ; style of use.
Your question is a bit confusing, but I guess you want to set a user-defined variable name to the 1st available file descriptor.
If that's the case, your function should simply echo the fd number:
getFd() {
    ...
    echo "$myFD"
}
And then, let's say foo contains the variable name to store the fd into - let's assume it contains the string bar:
eval $foo=`getFd`
After that variable bar will contain the first available fd.
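Putting it together, a sketch of the whole flow (this assumes the getFd variant above that echoes the descriptor number; /tmp/out.log is just a placeholder path):
foo=bar                          # foo holds the *name* of the target variable
eval $foo=`getFd`                # as above: bar now holds e.g. 3
eval "exec ${bar}>/tmp/out.log"  # open that descriptor for writing
echo "logged line" >&$bar        # write through the new descriptor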

Linux script to execute something when F1 is hit

I have this script start.sh
#!/bin/bash
while[1]
do
read -sn3 key
if [$key=="\033[[A"]
then
./test1
else
./test2
fi
done
I want to set up a forever loop that checks whether the F1 key was pressed. If pressed, execute test1, else test2. I ran start.sh & in the background so other programs can run.
I got these errors:
while [1] command not found
syntax error near unexpected token 'do'
[f==\033]: command not found
Also, where is this read command located? I typed which read, and it didn't find it.
Also, if I try ./start.sh & it gives totally different behavior. I enter a key and it says that key is not found. I thought & just runs the script in the background.
There are several basic syntax problems in your code (consider using shellcheck before posting to clean up these things), but the approach itself is flawed. Hitting "q" and "F1" produces different length inputs.
Here's a script relying on the fact that escape sequences all come in the same read call, which is dirty but effective:
#!/bin/bash
readkey() {
    local key settings
    settings=$(stty -g)            # save terminal settings
    stty -icanon -echo min 0       # disable buffering/echo, allow read to poll
    dd count=1 > /dev/null 2>&1    # throw away anything currently in the buffer
    stty min 1                     # don't allow read to poll anymore
    key=$(dd count=1 2> /dev/null) # do a single read(2) call
    stty "$settings"               # restore terminal settings
    printf "%s" "$key"
}
# Get the F1 key sequence from termcap, fall back on Linux console
# TERM has to be set correctly for this to work.
f1=$(tput kf1) || f1=$'\033[[A'
while true
do
    echo "Hit F1 to party, or any other key to continue"
    key=$(readkey)
    if [[ $key == "$f1" ]]
    then
        echo "Party!"
    else
        echo "Continuing..."
    fi
done
The while[1] should be
while :
or
while true
Try this:
#!/bin/bash
while true
do
    read -sn3 key
    if [ "$key" = "$(tput kf1)" ]
    then
        ./test1
    else
        ./test2
    fi
done
It is more robust to use tput to generate the control sequence, you can see a full list in man terminfo. If tput isn't available, you can use $'\eOP' for most terminal emulators or $'\e[[A' for the Linux console (the $ is necessary with the string to make bash interpret escape sequences).
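As a quick sanity check (my addition), you can make the sequence your terminal actually sends visible with printf's %q:
printf '%q\n' "$(tput kf1)"   # prints something like $'\EOP' or $'\E[[A'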
read is a bash builtin command - try help read.

Bash script to capture input, run commands, and print to file

I am trying to do a homework assignment and it is very confusing. I am not sure whether the professor's example is in Perl or bash, since it has no header. Basically, I just need help with the meat of the problem: capturing the input and outputting it. Here is the assignment:
In the session, provide a command prompt that includes the working directory, e.g.,
$ ./logger
/home/it244/it244/hw8$
Accept user’s commands, execute them, and display the output on the screen.
During the session, create a temporary file “PID.cmd” (PID is the process ID) to store the command history in the following format (index: command):
1: ls
2: ls -l
If the script is aborted by CTRL+C (signal 2), output a message “aborted by ctrl+c”.
When you quit the logging session (either by “exit” or CTRL+C),
a. Delete the temporary file
b. Print out the total number of the commands in the session and the numbers of successful/failed commands (according to the exit status).
Here is my code so far (which did not go well - I would not try to run it):
#!/bin/sh
trap 'exit 1' 2
trap 'ctrl-c' 2
echo $(pwd)
while true
do
read -p command
echo "$command:" $command >> PID.cmd
done
Currently when I run this script I get
command read: 10: arg count
What is causing that?
======UPDATE=========
OK, I made some progress. It's not quite working all the way - it doesn't like my bashtrap or my incremental index:
#!/bin/sh
index=0
trap bashtrap INT
bashtrap(){
    echo "CTRL+C aborting bash script"
}
echo "starting to log"
while :
do
    read -p "command:" inputline
    if [ $inputline="exit" ]
    then
        echo "Aborting with Exit"
        break
    else
        echo "$index: $inputline" > output
        $inputline 2>&1 | tee output
        (( index++ ))
    fi
done
This can be achieved in bash, perl, or other languages.
Some hints to get you started in bash:
Question 1: the command prompt /home/it244/it244/hw8
1) Make sure of the prompt format in the user's .bashrc setup file: see the PS1 variable on Debian-like distros.
2) cd into that directory within your bash script.
Question 2: run the user's command
1) Get the user input:
read -p "command : " input_cmd
2) Run the user command, sending its output to STDOUT:
bash -c "$input_cmd"
3) Track the user command's exit code:
echo $?
It should be "0" if everything worked fine (you can also find exit codes in the command man pages).
4) Track the command PID if the exit code is OK:
echo $$ >> /tmp/pid_Ok
But take care: the question asks you to keep the user's command input, not the PID itself as shown here.
5) Trap on exit: see man trap, as you misunderstood its use. You can create a function that is called on the caught EXIT or CTRL+C signals (see the sketch after these hints).
6) Increment the index in your while loop (on the exit code condition):
index=0
while ...
do
    ...
    ((index++))
done
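To illustrate hint 5, a minimal sketch of a trap handler (the function name and message are made up; /tmp/pid_Ok is the file from hint 4):
cleanup() {
    rm -f /tmp/pid_Ok      # remove the temporary file
    echo "session finished"
}
trap cleanup INT EXIT      # fire on CTRL+C and on normal exit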
I guess you have enough to start your homework.
Since the example posted used sh, I'll use that in my reply. You need to break down each requirement into its specific lines of supporting code. For example, in order to "provide a command prompt that includes the working directory" you need to actually print the current working directory as the prompt string for the read command, not by setting the $PS1 variable. This leads to a read command that looks like:
read -p "`pwd -P`\$ " _command
(I use leading underscores for private variables - just a matter of style.)
Similarly, the requirement to do several things on either a trap or a normal exit suggests a function should be created which could then either be called by the trap or to exit the loop based on user input. If you wanted to pretty-print the exit message, you might also wrap it in echo commands and it might look like this:
_cleanup() {
    rm -f $_LOG
    echo
    echo "$0 ended with $_success successful commands and $_fail unsuccessful commands."
    echo
    exit 0
}
So after analyzing each of the requirements, you'd need a few counters and a little bit of glue code such as a while loop to wrap them in. The result might look like this:
#!/bin/sh
# Define a function to call on exit
_cleanup() {
    # Remove the log file as per specification #5a
    rm -f $_LOG
    # Display success/fail counts as per specification #5b
    echo
    echo "$0 ended with $_success successful commands and $_fail unsuccessful commands."
    echo
    exit 0
}
# Where are we? Get absolute path of $0
_abs_path=$( cd -P -- "$(dirname -- "$(command -v -- "$0")")" && pwd -P )
# Set the log file name based on the path & PID
# Keep this constant so the log file doesn't wander
# around with the user if they enter a cd command
_LOG=${_abs_path}/$$.cmd
# Print ctrl+c msg per specification #4
# Then run the cleanup function
trap "echo aborted by ctrl+c;_cleanup" 2
# Initialize counters
_line=0
_fail=0
_success=0
while true
do
    # Count lines to support required logging format per specification #3
    ((_line++))
    # Set prompt per specification #1 and read command
    read -p "`pwd -P`\$ " _command
    # Echo command to log file as per specification #3
    echo "$_line: $_command" >>$_LOG
    # Arrange to exit on user input with value 'exit' as per specification #5
    if [[ "$_command" == "exit" ]]
    then
        _cleanup
    fi
    # Execute whatever command was entered as per specification #2
    eval $_command
    # Capture the success/fail counts to support specification #5b
    _status=$?
    if [ $_status -eq 0 ]
    then
        ((_success++))
    else
        ((_fail++))
    fi
done

Multi-threaded BASH programming - generalized method?

OK, I was running POV-Ray on all the demos, but POV is still single-threaded and wouldn't utilize more than one core. So, I started thinking about a solution in BASH.
I wrote a general function that takes a list of commands and runs them in the designated number of sub-shells. This actually works, but I don't like the way it handles accessing the next command in a thread-safe, multi-process way:
It takes, as an argument, a file with commands (one per line).
To get the "next" command, each process ("thread") will:
Wait until it can create a lock file, with: ln $CMDFILE $LOCKFILE
Read the command from the file,
Modify $CMDFILE by removing the first line,
Remove the $LOCKFILE.
Is there a cleaner way to do this? I couldn't get the sub-shells to read a single line from a FIFO correctly.
Incidentally, the point of this is to enhance what I can do on a BASH command line, and not to find non-bash solutions. I tend to perform a lot of complicated tasks from the command line and want another tool in the toolbox.
Meanwhile, here's the function that handles getting the next line from the file. As you can see, it modifies an on-disk file each time it reads/removes a line. That's what seems hackish, but I'm not coming up with anything better, since FIFOs didn't work without setvbuf() in bash.
#
# Get/remove the first line from FILE, using LOCK as a semaphore (with
# short sleep for collisions). Returns the text on standard output,
# returns zero on success, non-zero when file is empty.
#
parallel__nextLine()
{
    local line rest file=$1 lock=$2
    # Wait for lock...
    until ln "${file}" "${lock}" 2>/dev/null
    do
        sleep 1
        [ -s "${file}" ] || return $?
    done
    # Open, read one "line", save "rest" back to the file:
    exec 3<"$file"
    read line <&3 ; rest=$(cat <&3)
    exec 3<&-
    # After last line, make sure file is empty:
    ( [ -z "$rest" ] || echo "$rest" ) > "${file}"
    # Remove lock and 'return' the line read:
    rm -f "${lock}"
    [ -n "$line" ] && echo "$line"
}
#adjust these as required
args_per_proc=1 #1 is fine for long running tasks
procs_in_parallel=4
xargs -n$args_per_proc -P$procs_in_parallel povray < list
Note: the nproc command coming soon to coreutils will auto-determine the number of available processing units, which can then be passed to -P.
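For example (nproc did eventually land in coreutils, so this assumes a reasonably recent install):
# one argument per invocation, one worker per available CPU
xargs -n1 -P"$(nproc)" povray < list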
If you need real thread safety, I would recommend migrating to a better scripting system.
With python, for example, you can create real threads with safe synchronization using semaphores/queues.
Sorry to bump this after so long, but I pieced together a fairly good solution for this, IMO.
It doesn't work perfectly, but it will limit the script to a certain number of child tasks running, and then wait for all the rest at the end.
#!/bin/bash
pids=()
thread() {
    local this
    # wait for the oldest jobs until at most 6 remain
    while [ ${#} -gt 6 ]; do
        this=${1}
        wait "$this"
        shift
    done
    pids=($1 $2 $3 $4 $5 $6)
}
for i in 1 2 3 4 5 6 7 8 9 10
do
    sleep 5 &
    pids=( "${pids[@]}" "$!" )
    thread "${pids[@]}"
done
for pid in "${pids[@]}"
do
    wait "$pid"
done
It seems to work great for what I'm doing (handling the parallel uploading of a bunch of files at once) and keeps it from breaking my server, while still making sure all the files get uploaded before the script finishes.
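For comparison (my addition, not part of the answer above): bash 4.3 added wait -n, which waits for any single job to finish, so the same throttling can be written without tracking PIDs by hand. A sketch, with upload standing in for whatever the real per-file task is:
#!/bin/bash
max=6
for f in file1 file2 file3 file4; do
    # block while $max jobs are already running
    while (( $(jobs -rp | wc -l) >= max )); do
        wait -n
    done
    upload "$f" &    # hypothetical task; replace with the real command
done
wait                 # wait for the stragglers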
I believe you're actually forking processes here, and not threading. I would recommend looking for threading support in a different scripting language like perl, python, or ruby.

Trace of executed programs called by a Bash script

A script is misbehaving. I need to know who calls that script, and who calls the calling script, and so on, only by modifying the misbehaving script.
This is similar to a stack-trace, but I am not interested in a call stack of function calls within a single bash script.
Instead, I need the chain of executed programs/scripts that is initiated by my script.
A simple script I wrote some days ago...
#!/bin/bash
# FILE      : sctrace.sh
# LICENSE   : GPL v2.0 (only)
# PURPOSE   : print the recursive callers' list for a script
#             (sort of a process backtrace)
# USAGE     : [in a script] source sctrace.sh
#
# TESTED ON :
# - Linux, x86 32-bit, Bash 3.2.39(1)-release
# REFERENCES:
# [1]: http://tldp.org/LDP/abs/html/internalvariables.html#PROCCID
# [2]: http://linux.die.net/man/5/proc
# [3]: http://linux.about.com/library/cmd/blcmdl1_tac.htm
TRACE=""
CP=$$ # PID of the script itself [1]
while true # safe because "all starts with init..."
do
    CMDLINE=$(tr '\0' ' ' < /proc/$CP/cmdline) # cmdline is NUL-separated
    PP=$(grep PPid /proc/$CP/status | awk '{ print $2; }') # [2]
    TRACE="$TRACE [$CP]:$CMDLINE\n"
    if [ "$CP" == "1" ]; then # we reached 'init' [PID 1] => backtrace ends
        break
    fi
    CP=$PP
done
echo "Backtrace of '$0'"
echo -en "$TRACE" | tac | grep -n ":" # using tac to "print in reverse" [3]
... and a simple test.
I hope you like it.
You can use the Bash Debugger: http://bashdb.sourceforge.net/
Or, as mentioned in the previous comments, the caller bash builtin. See: http://wiki.bash-hackers.org/commands/builtin/caller
i=0; while caller $i ;do ((i++)) ;done
Or as a bash function:
dump_stack(){
    local i=0
    local line_no
    local function_name
    local file_name
    while caller $i; do
        ((i++))
    done | while read line_no function_name file_name; do
        echo -e "\t$file_name:$line_no\t$function_name"
    done >&2
}
Another way to do it is to change PS4 and enable xtrace:
PS4='+$(date "+%F %T") ${FUNCNAME[0]}() $BASH_SOURCE:${BASH_LINENO[0]}+ '
set -o xtrace # Comment this line to disable tracing.
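One addition worth knowing here: set -o xtrace normally stops at child scripts, and this question is about the chain of executed scripts. Exporting PS4 and SHELLOPTS makes child bash processes inherit the tracing (a bash-specific trick; SHELLOPTS cannot be assigned, but it can be exported):
export PS4='+$BASH_SOURCE:$LINENO+ '
export SHELLOPTS   # child bash shells pick up xtrace from the environment
set -o xtrace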
~$ help caller
caller: caller [EXPR]
Returns the context of the current subroutine call.
Without EXPR, returns "$line $filename". With EXPR,
returns "$line $subroutine $filename"; this extra information
can be used to provide a stack trace.
The value of EXPR indicates how many call frames to go back before the
current one; the top frame is frame 0.
Since you say you can edit the script itself, simply put a:
ps -ef >/tmp/bash_stack_trace.$$
in it, where the problem is occurring.
This will create a number of files in your tmp directory that show the entire process list at the time it happened.
You can then work out which process called which other process by examining this output. This can either be done manually, or automated with something like awk, since the output is regular - you just use those PID and PPID columns to work out the relationships between all the processes you're interested in.
You'll need to keep an eye on the files, since you'll get one per process so they may have to be managed. Since this is something that should only be done during debugging, most of the time that line will be commented out (preceded by #), so the files won't be created.
To clean them up, you can simply do:
rm /tmp/bash_stack_trace.*
UPDATE:
The code below should work. I now have a newer answer with a newer code version that allows a message to be inserted in the stack trace.
IIRC I just couldn't find this answer to update it as well at the time. But I have since decided code is better kept in git, so the latest version of the above should be in this gist.
Original, code-corrected answer below:
There was another answer about this somewhere, but here is a function for getting a stack trace in the sense used, for example, in the Java programming language. You call the function and it puts the stack trace into the variable $STACK. It shows the code points that led to get_stack being called. This is mostly useful for complicated executions where a single shell sources multiple script snippets with nesting.
function get_stack () {
    STACK=""
    # to avoid noise we start with 1 to skip get_stack's own frame
    local i
    local stack_size=${#FUNCNAME[@]}
    for (( i=1; i<stack_size; i++ )); do
        local func="${FUNCNAME[$i]}"
        [ -z "$func" ] && func=MAIN
        local linen="${BASH_LINENO[$(( i - 1 ))]}"
        local src="${BASH_SOURCE[$i]}"
        [ -z "$src" ] && src=non_file_source
        STACK+=$'\n'"   $func $src $linen"
    done
}
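For example, called from inside nested functions (the file names and line numbers in $STACK will vary):
f() { get_stack; echo "stack trace:$STACK" >&2; }
g() { f; }
g   # prints the frames (f, g, main) that led to the get_stack call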
Adding pstree -p -u `whoami` >>output in your script will probably get you the information you need.
The simplest script which returns a stack trace with all callers:
i=0; while caller $i ;do ((i++)) ;done
You could try something like:
strace -f -e execve ./script.sh
Here -f follows child processes as they are created, and -e execve limits the trace to exec calls, so the output is the chain of programs executed.
