Get triggered by GPIO state change in bash - Linux

I have a GPIO pin, the value of which is represented in the sysfs node /sys/class/gpio/gpioXXXX/value, and I want to detect a change to the value of this GPIO pin. According to the sysfs documentation you should use poll(2) or select(2) for this.
However, both poll and select only seem to be available as system calls and not from bash. Is there some way to get triggered by a state change of the GPIO pin from a bash script?
My intention is to avoid (semi-)busy waiting or userland polling. I would also like to do this from bash without having to dip into another language. I don't plan to stick with bash throughout the project, but I do want to use it for this very first version. Writing a simple C program to be called from bash just for this is a possibility, but before doing that I would like to know whether I'm missing something.

Yes, you'll need a C or Python helper -- and you might think about abandoning bash for this project entirely.
See this gist for an implementation of such a helper (named "wfi", for "watch-for-interrupt"), modified from a Raspberry Pi StackExchange question's answer.
That said:
If you want to (semi-)efficiently have a shell script monitor for a GPIO signal change, you'll want to have a C helper that uses poll() and writes to stdout whenever a noteworthy change occurs. Given that, you can then write a shell loop akin to the following:
while IFS= read -r event; do
echo "Processing $event"
done < <(wfi /sys/class/gpio/gpioXXXX/value)
Using process substitution in this way ensures that the startup cost for your monitor-gpio-signal helper is paid only once. Note some caveats:
Particularly if anything inside the body of your loop calls an external command (rather than relying on shell builtins alone), this is still going to be much slower than using a program written in C, Go or even an otherwise-relatively-slow language such as Python.
If the shell script isn't ready to receive a write, that write may block for an indefinite amount of time. A tool such as pv may be useful to add a buffer to your pipeline:
done < <(wfi "name" | pv -q -B 1M)
...for instance, will establish a 1MB buffer.

Related

Is there a way to make a bash script process messages that have been sent to it using the write command

Is there a way to make a bash script process messages that have been sent to it using the "write" command? So for example, if a user wants to activate a feature in my script, could I make it so that they can send the script a command using the write command?
One possible method I thought of was to configure logging for a screen session and then have the bash script parse the log, but I'm not sure whether there is a simpler or more efficient way to tackle this.
EDIT: I was thinking that as an alternative solution I could use a named pipe. I'm worried that it would break, though, if the /tmp partition gets filled up completely (not sure whether this would impact write as well?). I'm going to be running this script on a shared box, and every once in a while someone will completely fill up the /tmp partition and then just leave it like that until people start complaining.
Hmm, you are really trying to coerce a poor Unix command into doing something it was not specified for. From the man page (emphasis mine):
The write utility allows you to communicate with other users, by copying
lines from your terminal to theirs
That means that write is intended to copy lines directly onto terminals. As soon as you say "I will dump terminal output with screen, and then parse the dump file", you lose the simplicity of write (and you also need disk space, with the problem of removing old lines from a sequential file).
Worse, as your script lives on its own, it could (should?) be a daemon script attached to no terminal.
So if I have correctly understood your question, your requirements are:
a script that does some tasks and should be able to respond to asynchronous requests. Common mechanisms are named pipes, or network or Unix domain sockets (see the sketch after this list); less common are files in a dedicated folder, with an optional signal for immediate processing. Appending lines to a sequential file, while possible, is uncommon because of access-synchronization problems.
a simple and user-friendly way for users to pass requests. OK, write is nice for that part, but much too hard to interface with, IMHO.
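As a rough illustration of the named-pipe option (a minimal sketch, not a drop-in solution; the paths are made up, and the FIFO is deliberately kept outside /tmp, since a full /tmp would only block its creation anyway; the data in a pipe lives in kernel memory rather than on disk):
mkdir -p "$HOME/.myscript"
mkfifo "$HOME/.myscript/control"
while read -r request < "$HOME/.myscript/control"; do
  echo "processing request: $request"   # replace with real handling
done
A user would then activate a feature with, e.g., echo activate-feature > "$HOME/.myscript/control".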
If you want to avoid wasting time on that part and use standard tools, I would recommend the mail system. It is trivial to alias a mail address to a program that will be called with the mail message as input. But I am not sure it is worth it, because the user could directly call the program with the request as input or as a command-line parameter.
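For example, a hypothetical /etc/aliases entry in the sendmail/postfix pipe-alias style (the alias name and handler path are made up; run newaliases after editing):
requests: "|/usr/local/bin/handle-request"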
So the client part could simply be a program that (sketched after this list):
create a temporary file in a dedicated folder (mkstemp is your friend in C or C++, or mktemp in shell - but beware of race conditions)
write the request to that file
optionally send a signal to a pid - provided the script writes its own PID to a dedicated file on startup
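A minimal shell sketch of that client, using an atomic rename to avoid the half-written-file race; the spool directory, PID file, and signal choice are all illustrative assumptions:
spool="$HOME/.myscript/requests"
tmp=$(mktemp "$spool/.req.XXXXXX")       # private temp file, created atomically
printf '%s\n' "activate-feature" > "$tmp"
mv "$tmp" "$spool/req.$$"                # rename is atomic within one filesystem
kill -USR1 "$(cat "$HOME/.myscript/pid")" 2>/dev/null   # optional immediate-processing nudge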

how to avoid deadlock with parallel named pipes?

I am working on a flow-based programming system called net2sh. It is currently based on shell tools connected by named pipes. Several processes work together to get the job done, communicating over named pipes, not unlike a production line in a factory.
In general it is working delightfully well; however, there is one major problem with it. In the case where processes are communicating over two or more named pipes, the "sending" process and "receiving" process must open the pipes in the same order. This is because when a process opens a named pipe, it blocks until the other end has also been opened.
I want a way to avoid this, without spawning extra "helper" processes for each pipe, without having to hack existing components, and without having to mess with the program networks to avoid this problem.
Ideally I am looking for some "non-blocking fifo" option, where "open" on a fifo always succeeds immediately but subsequent operations may block if the pipe buffer is full (or empty, for reads). I'd even consider using a kernel patch to that effect. According to fifo(7), O_NONBLOCK does something different when opening FIFOs, not exactly what I want, and in order to use it I would have to rewrite every existing shell tool such as cat.
Here is a minimal example which deadlocks:
mkfifo a b
(> a; > b; ) &
(< b; < a; ) &
wait
If you can help me to solve this sensibly I will be very grateful!
There is a good description of using O_NONBLOCK with named pipes here: How do I perform a non-blocking fopen on a named pipe (mkfifo)?
It sounds like you want this to work in your entire environment without changing any C code. One approach, therefore, would be to set LD_PRELOAD to a shared library containing a wrapper for open(2) that adds O_NONBLOCK to the flags whenever pathname refers to a named pipe.
A concise example of using LD_PRELOAD to override a library function is here: https://www.technovelty.org/c/using-ld_preload-to-override-a-function.html
Whether this actually works in practice without breaking anything else, you'll have to find out for yourself (please let us know!).
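A shell-only workaround may also be worth considering, although it is not what the answer above proposes: fifo(7) notes that Linux allows opening a FIFO read-write, and such an open succeeds immediately. Holding a read-write descriptor on each pipe means every later one-sided open finds a counterpart and returns at once, so the minimal example no longer deadlocks:
mkfifo a b
exec 3<>a 4<>b    # read-write opens of FIFOs do not block on Linux
(> a; > b; ) &
(< b; < a; ) &
wait
exec 3<&- 4<&-    # close them afterwards, or readers will never see EOF
Note that POSIX leaves O_RDWR on FIFOs undefined, so this is Linux-specific behaviour.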

How to create a virtual command-backed file in Linux?

What is the most straightforward way to create a "virtual" file in Linux that would allow read operations on it, always returning the output of some particular command (run every time the file is read from)? So, every read operation would cause an execution of a command, catching its output and passing it as the "content" of the file.
There is no way to create such a so-called "virtual file". On the other hand, you would be able to achieve this behaviour by implementing a simple synthetic filesystem in userspace via FUSE. Moreover, you don't have to use C; there are bindings even for scripting languages such as Python.
Edit: And chances are that something like this already exists: see for example scriptfs.
This is a great answer, copied below. Basically, named pipes let you do this in scripting, and FUSE lets you do it easily in Python.
You may be looking for a named pipe.
mkfifo f
{
  echo 'V cebqhpr bhgchg.'
  sleep 2
  echo 'Urer vf zber bhgchg.'
} >f
rot13 < f
Writing to the pipe doesn't start the listening program. If you want to process input in a loop, you need to keep a listening program running.
while true; do rot13 <f >decoded-output-$(date +%s.%N); done
Note that all data written to the pipe is merged, even if there are multiple processes writing. If multiple processes are reading, only one gets the data. So a pipe may not be suitable for concurrent situations.
A named socket can handle concurrent connections, but this is beyond the capabilities of basic shell scripts.
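That said, if a tool such as socat is available, it can bridge that gap without leaving the shell (a sketch; the socket path and handler script are made up). Each connection forks a fresh handler with the connection attached to its stdin and stdout:
socat UNIX-LISTEN:/tmp/service.sock,fork EXEC:/usr/local/bin/handler.sh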
At the most complex end of the scale are custom filesystems, which let you design and mount a filesystem where each open, write, etc., triggers a function in a program. The minimum investment is tens of lines of nontrivial code, for example in Python. If you only want to execute commands when reading files, you can use scriptfs or fuseflt.
No one has mentioned this, but if you can choose the path to the file you can use the standard input, /dev/stdin.
Every time the cat program runs, it ends up reading the output of the program writing to the pipe, which here is simply echo my input:
for i in 1 2 3; do
  echo my input | cat /dev/stdin
done
outputs:
my input
my input
my input
I'm afraid this is not easily possible. When a process reads from a file, it uses system calls like open, fstat, read. You would need to intercept these calls and output something different from what they would return. This would require writing some sort of kernel module, and even then it may turn out to be impossible.
However, if you simply need to trigger something whenever a certain file is accessed, you could play with inotifywait:
#!/bin/bash
while inotifywait -qq -e access /path/to/file; do
echo "$(date +%s)" >> /tmp/access.txt
done
Run this as a background process, and you will get an entry in /tmp/access.txt each time your file is being read.

In Linux, I'm looking for a way for one process to signal another, with blocking

I'm looking for a simple event notification system:
Process A blocks until it gets notified by...
Process B, which triggers Process A.
If I were doing this in Win32 I'd likely use event objects ('A' blocks until 'B' does a SetEvent).
I need something pretty quick and dirty (prefer script rather than C code).
What sort of things would you suggest? I've wondered about file advisory locks, but it seems messy: one of the processes has to actively open the file in order to hold a lock.
Quick and dirty?
Then use a FIFO, i.e. a named pipe. Process A reads from the FIFO's file descriptor in blocking mode; process B writes to it when needed.
Simple, indeed.
And here is the bash scripting implementation:
Program A:
#!/bin/bash
mkfifo /tmp/event
while read -n 1 </tmp/event; do
echo "got message";
done
Program B:
#!/bin/bash
echo -n "G" >>/tmp/event
First start script A, then in another shell window repeatedly start script B.
Other than a FIFO, you can use signals: with kill, one process can send e.g. SIGUSR1 to another process that sleeps until the signal arrives and is then unblocked, essentially giving you interrupts without any polling.
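A minimal sketch of that signal approach in bash (the PID file path is an assumption, and "sleep infinity" requires GNU coreutils; a signal delivered between iterations, outside the wait, can still be missed):
Process A:
#!/bin/bash
echo $$ > /tmp/a.pid
trap ':' USR1               # a trapped signal makes "wait" return early
while true; do
  sleep infinity &          # park without polling
  wait $!                   # returns as soon as SIGUSR1 is delivered
  kill $! 2>/dev/null       # reap the parked sleep
  echo "got notified"
done
Process B:
kill -USR1 "$(cat /tmp/a.pid)"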
Slow and clean?
Then use (named) semaphores: either POSIX or SysV (the latter is not recommended, but possibly slightly more portable). Process A does a sem_wait (or sem_timedwait) and process B calls sem_post.

Controlling multiple background process from a shell on an embedded Linux

Currently I am working with an embedded system that runs the Linux OS. I need to run multiple applications at the same time, and I would like to be able to run them through one script. A colleague already implemented this using a wrapper script and return codes.
wrapperScript.sh $command > output_log.txt &
wrapperScript.sh $command2 > output_log2.txt &
But the problem arises when exiting the applications. Normally, all the applications on the embedded system require the user to press q to exit. But the wrapper script, rather than doing that when it gets a kill or user signal, just kills the process. This is dangerous because it assumes that the application has the proper facilities to deal with the kill signal (that is not always the case, and it leads to memory leaks and unwanted socket connections). I have looked into automation programs such as Expect, but since I am using an embedded board, I am unable to get Expect for it. Is there a way, in the bash shell or in embedded C, to deal with multiple processes and have one single program automatically send the q keystroke to the programs?
I would also like the capability to maintain logs of the programs' output.
EDIT:
Solution:
Okay, I found the solution to the problem: Expect is the way to go about it in this situation. There is a serious limitation in that it might be slower, but the trade-off is not bad here. I decided to use the Expect scripting language to implement the solution. There are certain trade-offs:
Pros:
* Precise control over the embedded application
* Can make the process interactive to the user
* Can deal with multiple processes
Cons:
* Performance is slow
Use a pipe
Make the command read input from a named pipe. You'll then be able to send it commands from anywhere.
mkfifo command1.ctrl
{ "$command1" <command1.ctrl >command1.log 2>&1;
  rm command1.ctrl; } &
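To tell the application to quit, write the keystroke into the control pipe; assuming, as in the question, that the program expects a plain q (add a newline if it reads whole lines):
printf 'q' > command1.ctrl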
Use screen
Run your applications inside the Screen program. You can run all your commands in separate windows of a single instance of Screen (you'll save a little memory that way). You can specify the commands to run in a Screen configuration file:
sessionname mycommands
screen -t command1 command1
screen -t command2 command2
To terminate a program, use
screen -S mycommands -p 1 -X stuff 'q
'
where 1 is the number of the window to send the input to (each screen clause in the configuration file starts a window). The text after stuff is the input to send to the program; note the presence of a newline after the q (some applications may require a carriage return instead; you can get one with stuff "q$(printf \\015)" if your shell isn't too feature-starved). If your command expects a q with no newline at all, just stuff q.
For logging, you can use Screen's logging feature, or redirect the output to a file as before.
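For the logging route, a couple of lines in the same Screen configuration file may suffice (a sketch: deflog and logfile are Screen configuration commands, and %n expands to the window number, but verify the details against your Screen version's manual):
deflog on
logfile screenlog.%n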
