ImageMagick API for command-line GUI application interface to `display` [closed] - linux

I'd like, basically, a quick way to select a box (a region of interest) in an image, and get its geometry in ImageMagick's format (e.g. 300x200+10+20 for a 300x200 box whose top-left corner is offset by 10,20). I cannot see an easy way to do this with the default ImageMagick display viewer, so I'm looking for an API (and, hopefully, examples) that would let me code my own viewer.
A bit of background: the ImageMagick forum topic "selecting a region of interest from command line" (2008) says it cannot be done; however, "display: ImageMagick - Region of Interest" (2003?) explains how to do it, apparently referring to an older version.
Anyway, this is how things look if you call display -size 300x500 pattern:checkerboard (pattern:checkerboard is a built-in pattern image in ImageMagick):
Once the "ImageMagick" display window is up, click on it; the Command menu appears. From it choose "Image Edit" / "Region of Interest...", after which you can click and drag in the viewer window. You also get the geometry in the upper-left corner - but you cannot copy/paste it as text (so I've had to retype it).
Also, display run from the command line ties up the terminal (see "Make imagemagick's display exit at terminal, preserving the window (single instance mode)" on Super User) - and I cannot see a way to force it into a "single instance mode", such that I could pass filenames on the command line and display would load them in the one and the same currently running instance.
Now, I've found Casting spells with ImageMagick - Image manipulation for programmers (2012), which mentions a MagickWand API; after some searching, I found on the imagemagick site:
ImageMagick: MagickWand, C API for ImageMagick
ImageMagick: MagickCore, Low-level C API for ImageMagick
ImageMagick: PerlMagick, Perl API for ImageMagick
ImageMagick: Magick++, C++ API for ImageMagick
So, my first thought was a script in Python - but apparently Perl is the only scripting-language API there, which is fine.
However, what I need to code is basically a command-line interface which starts a display-like window process as a "single instance" and releases the terminal, while passing parameters such as file name, -density etc. to the window; the window would then react to mouse clicks, allowing selection of a crop geometry box (region of interest) - and finally render the geometry string in a text box, so it can be copied. But as far as I can see, all the APIs are oriented toward performing the functions of the command-line convert.
So my question is: can any of these APIs be used to program a display-like GUI, and are there any examples of a similar nature (preferably in a scripting language, but I'll live with C/C++) that can be pointed out?
Many thanks in advance for any answers,
Cheers!

Well, this turned out to be a bit of a pain, but I managed to put together a Perl-Tk script using the ImageMagick API that behaves like what I wanted: imgckdis.pl (code also below). Here is a screenshot:
Note that it pretty much just displays an image at a hardcoded 400x400 px (although it may extend for bigger images) - there are no menus, no mouse interaction (scrollwheel zoom), pretty much nothing :) The script accepts only one command-line argument, a file to be opened, but it also understands ImageMagick specials like "xc:white" (the ImageMagick portion will even automatically render SVG files, as shown in the screenshot).
But one thing it is capable of is working in single-instance mode: the first instance started becomes the "master", draws the Tk window, and ties up its terminal. Subsequent instances of the script, realizing the master instance is already running, will simply issue a command to the master to load a new image.
This "issuing a command to the master" turned out to be not so easy, as the collection of links below shows (as well as the revision notes in the online version). I thought at first that interprocess-communication shared variables would allow me to store a "pointer by reference" to the master, and then allow subsequent instances to call functions on it. Well, it seems that cannot be done - for one, Perl may discourage it - but even if you hop over all those checks, in the end you get a memory address that is not seen as being in shared space, so nothing can be retrieved through it. Furthermore, the IPC::Shareable Perl package is possibly "guaranteed" only for integers and strings?!
Nevertheless, the approach that finally worked is, as hinted, to have the master poll for changes in the shared variables, and have non-master instances simply change such a variable when they are called - and this approach seems to work. However, for a "real" application, one would then have to think about organizing quite a few of these shared variables.
Well, you still cannot zoom or reposition the image, or draw a geometry rectangle - but at least it's an example that can be demonstrated to work (at least on Ubuntu) :)
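And once you do have a geometry string, applying it with the same Perl API is short. A minimal sketch - the file names and the region here are hypothetical:

use Image::Magick;
my $img = Image::Magick->new();
$img->Read('input.png');                  # load the source image
$img->Crop(geometry => '300x200+10+20');  # hypothetical region of interest
$img->Write('cropped.png');               # save the cropped result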
Hope this helps someone,
Cheers!
The code:
#!/usr/bin/perl
# imgckdis.pl
# http://sdaaubckp.svn.sf.net/viewvc/sdaaubckp/single-scripts/imgckdis.pl

use warnings;
use strict;
use Image::Magick; # sudo apt-get install perlmagick # debian/ubuntu
use Tk;
use MIME::Base64;
use Carp;
use Fcntl ':flock';
use Data::Printer;
use Class::Inspector;
use IPC::Shareable;

# Single-instance check: the first instance to get an exclusive,
# non-blocking flock on the script file itself becomes the "master"
my $amMaster = 1;
my $file_read;

open my $self, '<', $0 or die "Couldn't open self: $!";
flock $self, LOCK_EX | LOCK_NB or $amMaster = 0;

if ($amMaster == 1) {
  print "We are master single instance as per flock\n";
  IPC::Shareable->clean_up_all;
}

if (!$ARGV[0]) {
  $file_read = "xc:white";
} else {
  $file_read = $ARGV[0];
}
chomp $file_read;

my %options = (
  create    => 1,
  exclusive => 0,
  mode      => 0644,
  destroy   => 0,
);

my $glue1 = 'dat1';
my $glue2 = 'dat2';

my $refcount;
my $reffname;
my $lastreffname;

# Shared variables (IPC::Shareable): the master polls $reffname,
# non-master instances write the new filename into it
my $refcount_handle = tie $refcount, 'IPC::Shareable', $glue1, \%options;
if ($amMaster == 1) {
  $refcount = undef;
}
my $reffname_handle = tie $reffname, 'IPC::Shareable', $glue2, \%options;
if ($amMaster == 1) {
  $reffname = undef;
}

my ($image, $blob, $content, $tkimage, $mw);

if ($amMaster == 1) { # if (not(defined($refcount))) {
  # initialize the assigns
  $lastreffname = "";
  $reffname_handle->shlock(LOCK_SH|LOCK_NB);
  $reffname = $file_read; #
  $reffname_handle->shunlock();
  $refcount_handle->shlock(LOCK_SH|LOCK_NB);
  $refcount = 1; #
  $refcount_handle->shunlock();
}

# mainly from http://objectmix.com/perl/771215-how-display-image-magick-image-tk-canvas.html
sub generateImageContent() {
  # fake a PGM, then convert it to GIF
  $image = Image::Magick->new(
    size => "400x400",
  );
  $image->Read($file_read); # ("xc:white");
  $image->Draw(
    primitive => 'line',
    points    => "300,100 300,500",
    stroke    => '#600',
  );
  # set it as PGM
  $image->Set(magick=>'pgm');
  # your PGM is loaded here; now change it to GIF or whatever
  $image->Set(magick=>'gif');
  $blob = $image->ImageToBlob();
  # Tk wants base64-encoded images
  $content = encode_base64( $blob ) or die $!;
}

sub loadImageContent() {
  # fake a PGM, then convert it to GIF
  $image = Image::Magick->new(
    size => "400x400",
  );
  $image->Read($lastreffname); # ("xc:red") for test
  # set it as PGM
  $image->Set(magick=>'pgm');
  # your PGM is loaded here; now change it to GIF or whatever
  $image->Set(magick=>'gif');
  $blob = $image->ImageToBlob();
  # Tk wants base64-encoded images
  $content = encode_base64( $blob ) or die $!;
  #~ $tkimage->read($content); # expects filename
  $tkimage->put($content); # works!
}

sub CleanupExit() {
  # only one remove() passes - the second fails: "Couldn't remove shared memory segment/semaphore set"
  (tied $refcount)->remove();
  IPC::Shareable->clean_up;
  $mw->destroy();
  print "Exiting application!\n";
  exit;
}

# polled by the master: reload the image whenever $reffname changes
sub updateVars() {
  if ( not($reffname eq $lastreffname) ) {
    print "Change: ", $lastreffname, " -> ", $reffname, "\n";
    $lastreffname = $reffname;
    loadImageContent();
  }
}

if ( not($amMaster == 1) ) {
  # simply set the shared variable to cmdarg variable
  # (master's updateVars should take care of update)
  $reffname_handle->shlock(LOCK_SH|LOCK_NB);
  $reffname = $file_read;
  $reffname_handle->shunlock();
  # and exit now - we don't want a second instance
  print "Main instance of this script is already running\n";
  croak "Loading new file: $file_read";
}

$mw = MainWindow->new();
$mw->protocol(WM_DELETE_WINDOW => sub { CleanupExit(); } );

generateImageContent();

$tkimage = $mw->Photo(-data => $content);
$mw->Label(-image => $tkimage)->pack(-expand => 1, -fill => 'both');
$mw->Button(-text => 'Quit', -command => sub { CleanupExit(); } )->pack;

# polling function for sharable - 100 ms
$mw->repeat(100, \&updateVars);

MainLoop;

__END__
Relevant links:
How to display an Image::Magick image in a Tk::Canvas?
Installing the Perl Image::Magick module on CentOS 5.2 (Fourmilog: None Dare Call It Reason)
perl - How do I install Image::Magick on Debian etch? - Stack Overflow
[magick-users] PerlMagick 6.0.0 Composite -opacity doesn't work
Ensuring only one copy of a perl script is running at a time
Re: Limiting a program to a single running instance - nntp.perl.org
Sys::RunAlone - search.cpan.org
What's the best way to make sure only one instance of a Perl program is running? - Stack Overflow
reinstall PERL - PERL Beginners (Do you need to predeclare croak?)
Image in Perl TK?
Perl Tk::Photo help
introspection - How do I list available methods on a given object or package in Perl? - Stack Overflow
Can't install IPC:Shareable
Share variables between Child processes in perl without IPC::Shareable - Stack Overflow
IPC::Shareable - search.cpan.org
perl - Checking IPC Shareable lock - Stack Overflow
Storing complex data structures using Storable
using tie on two arrays on IPC::Shareable makes array1 and array2 both same even though array2 is not updated.
Dereferencing in perl
Shared Memory using IPC::Shareable - Can't use an undefined value as an ARRAY reference
Re: Handling child process and close window exits in Perl/Tk
How can I convert the stringified version of array reference to actual array reference in Perl? - Stack Overflow
Re: IPC::Shareable Problem with multidimentional hash
perl - IPC::Shareable variables, "Can't use string ... as a SCALAR ref.." and memory address - Stack Overflow
Perl/Tk App and Interprocess Communication
Re: Antw: Re: Perl/Tk + Thread - nntp.perl.org

Related

Suppressing output after SSH to another server

When I SSH to another server, there are some blurbs of text that always get printed when you log in (whether via SSH or just logging in to its own session).
An "authentication banner" is what it prints out every time I either scp a file over or SSH into it.
My code iterates through a list of servers and sends a file; each time it does that, it outputs a lot of text I'd like to suppress.
This code loops through each server, printing out what it's doing.
for (my $j = 0; $j < $#servName + 1; $j++)
{
    print "\n\nSending file: $fileToTransfer to \n$servName[$j]:$targetLocation\n\n";
    my $sendCommand = `scp $fileToTransfer $servName[$j]:$targetLocation`;
    print $sendCommand;
}
But then it comes out like this:
Sending file: /JacobsScripts/AddAlias.pl to
denamap2:/release/jscripts
====================================================
Welcome authorized users. This system is company
property and unauthorized access or use is prohibited
and may subject you to discipline, civil suit or
criminal prosecution. To the extent permitted by law,
system use and information may be monitored, recorded
or disclosed. Using this system constitutes your
consent to do so. You also agree to comply with applicable
company procedures for system use and the protection of
sensitive (including export controlled) data.
====================================================
Sending file: /JacobsScripts/AddAlias.pl to
denfpev1:/release/jscripts
====================================================
Welcome authorized users. This system is company
property and unauthorized access or use is prohibited
and may subject you to discipline, civil suit or
criminal prosecution. To the extent permitted by law,
system use and information may be monitored, recorded
or disclosed. Using this system constitutes your
consent to do so. You also agree to comply with applicable
company procedures for system use and the protection of
sensitive (including export controlled) data.
====================================================
I haven't tried much; I saw a few forums that mention capturing the output into a file and then deleting it, but I don't know if that will work for my situation.
NOTE: This answer assumes that on the system in question the ssh/scp messages go to the STDERR stream (or perhaps even directly to /dev/tty)†, like they do on some systems I test with - thus the question.
If not, then ikegami's answer of course takes care of it: just don't print the captured STDOUT. But even in that case, I also think that all the ways shown here are better for capturing output (except for the one involving the shell), especially when both streams are needed.
These prints can be suppressed by configuring the server, or perhaps via a .hushlogin file, but that clearly depends on the server management.
Otherwise, yes, you can redirect the standard streams to files or, better yet, to variables, which makes the overall management easier.
Using IPC::Run
use IPC::Run qw(run);

my ($file, $servName, $targetLocation) = ...

my @cmd = ("scp", $file, $servName, $targetLocation);

run \@cmd, '1>', \my $out, '2>', \my $err;

# Or redirect both to one variable
# run \@cmd, '>&', \my $out_err;
This mighty and rounded library allows great control over the external processes it runs; it provides almost a mini shell.
Or using the far simpler, and very handy Capture::Tiny
use Capture::Tiny qw(capture);
...
my ($out, $err, $exit) = capture { system @cmd };
Here output can be merged using capture_merged. Working with this library is also clearly superior to builtins (qx, system, pipe-open).
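For example, a minimal sketch of the merged form (capture_merged is exported on request):

use Capture::Tiny qw(capture_merged);
my $merged = capture_merged { system @cmd };  # STDOUT and STDERR interleaved
my $status = $?;                              # scp's exit status, as set by system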
In both cases then inspect the $out and $err variables, which is far less cut-and-dried, as the error messages depend on your system. For some errors the library routines die/croak, but for others they don't and merely print to STDERR. It is probably more reliable to use the other tools these libraries provide for detecting errors.
The ssh/scp "normal" (non-error) messages may print to either the STDERR or STDOUT stream, or may even go directly to /dev/tty,† so they can be mixed with error messages.
Given that the intent seems to be to intersperse these scp commands with other prints, I'd recommend either of these two ways over the others below.
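A minimal sketch of that inspection, continuing the Capture::Tiny example above (what exactly to look for in $err is system-dependent):

my ($out, $err, $exit) = capture { system @cmd };
warn "scp failed (status $exit)" if $exit != 0;
if (length $err) {
    # may be just the login banner, or a real error - inspect before discarding
    warn "scp wrote to STDERR:\n$err";
}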
Another option, which I consider least satisfactory overall, is to use the shell to redirect output in the command itself, either to separate files
my ($out_file, $err_file) = ...
system("@cmd 2> $err_file 1> $out_file") == 0
    or die "system(@cmd...) error: $?";  # see "system" in perldoc
or, perhaps for convenience, both streams can go to one file
system("#cmd > $out_err_file 2>&1" ) == 0 or die $?;
Then inspect the files for errors and remove them if there is nothing remarkable. Or, shell redirections can be used as in the question, but capturing all output:
my $out_and_err = qx(@cmd 2>&1);
Then examine the (possibly multiline) variable for errors.
Or, instead of dealing with individual commands, we can redirect the streams themselves to files for the duration of a larger part of the program:
use warnings;
use strict;
use feature 'say';

# Save filehandles ('dup' them) so as to be able to reopen later
open my $saveout, ">&STDOUT" or die "Can't dup STDOUT: $!";
open my $saveerr, ">&STDERR" or die "Can't dup STDERR: $!";

my ($outf, $errf) = qw(stdout.txt stderr.txt);
open *STDOUT, ">", $outf or die "Can't redirect STDOUT to $outf: $!";
open *STDERR, ">", $errf or die "Can't redirect STDERR to $errf: $!";

my ($file, $servName, $targetLocation) = ...

my @cmd = ("scp", $file, $servName, $targetLocation);
system(@cmd) == 0
    or die "system(@cmd) error: $?";  # see "system" in perldoc

# Restore standard streams when needed for normal output
open STDOUT, '>&', $saveout or die "Can't reopen STDOUT: $!";
open STDERR, '>&', $saveerr or die "Can't reopen STDERR: $!";

# Examine what's in the files (errors?)
I use system instead of qx (the operator form of backticks) since there is no need for the output from scp. Most of this is covered in the documentation for open; search SO for specifics.
It'd be nice to be able to reopen the streams to variables, but that doesn't work here.
† This is even prescribed ("allowed") by POSIX
/dev/tty
In each process, a synonym for the controlling terminal associated with the process group of that process, if any. It is useful for programs or shell procedures that wish to be sure of writing messages to or reading data from the terminal no matter how output has been redirected. It can also be used for applications that demand the name of a file for output, when typed output is desired and it is tiresome to find out what terminal is currently in use.
Courtesy of this superuser post, which has a substantial discussion.
You are capturing the text, then printing it out using print $sendCommand;. You could simply remove that statement.
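Applied to the loop from the question, that amounts to the following sketch (keeping the capture, dropping the print):

for (my $j = 0; $j < $#servName + 1; $j++)
{
    print "\n\nSending file: $fileToTransfer to \n$servName[$j]:$targetLocation\n\n";
    # capture scp's output as before, but don't print it
    my $sendCommand = `scp $fileToTransfer $servName[$j]:$targetLocation`;
}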

Getting extra newlines with fmt in Windows

I started using fmt for printing recently. I really like the lib, fast, easy to use. But when I completed my conversion, there are ways that my program can run that will render with a bunch of additional newlines. It's not every case, so this will get a bit deep.
What I have is a compiler and a build manager. The build manager (picture Ninja, although this is a custom tool) launches compile processes, buffers the output, and prints it all at once. Both programs have been converted to use fmt. The key function being called is fmt::vprint(stream, format, args). When the build manager prints directly, things are fine. But when I'm reading the child process output, any \n in the data has been prefixed with \r. Windows Terminal will render that fine, but some shells (such as the Visual Studio output window) do not, and will show a bunch of extra newlines.
fmt is open source so I was able to hack on it a bunch and see what is different between what it did and what my program was doing originally. The crux is this:
namespace detail {

FMT_FUNC void print(std::FILE* f, string_view text) {
#ifdef _WIN32
  auto fd = _fileno(f);
  if (_isatty(fd)) {
    detail::utf8_to_utf16 u16(string_view(text.data(), text.size()));
    auto written = detail::dword();
    if (detail::WriteConsoleW(reinterpret_cast<void*>(_get_osfhandle(fd)),
                              u16.c_str(), static_cast<uint32_t>(u16.size()),
                              &written, nullptr)) {
      return;
    }
    // Fallback to fwrite on failure. It can happen if the output has been
    // redirected to NUL.
  }
#endif
  detail::fwrite_fully(text.data(), 1, text.size(), f);
}

}  // namespace detail
As a child process, the _isatty() function will come back with false, so we fall back to the fwrite() function, and that triggers the \r escaping. In my original program, I have an fwrite() fallback as well, but it only picks up if GetStdHandle(STD_OUTPUT_HANDLE) returns nullptr. In the child process case, there is still a console we can WriteFile() to.
The other side-effect I see happening is if I use the fmt way of injecting color, eg:
fmt::print(fmt::emphasis::bold | fg(fmt::color::red), "Elapsed time: {0:.2f} seconds", 1.23);
Again, Windows Terminal renders it correctly, but in Visual Studio's output window this turns into a soup of garbage. The native way of doing it - SetConsoleTextAttribute(console, FOREGROUND_RED | FOREGROUND_GREEN | FOREGROUND_INTENSITY); - does not trigger that problem.
I tried hacking up the fmt source to be more like my original console-printing code. The key difference was the _isatty() check; I suspect that test is too broad for the cases where console printing might fail.
\r is added because the file is opened in text mode. You could try (re)opening it in binary mode, or ignoring the \r on the read side.

How to check the available free space in Haxe?

I can't find a way to check the free space available on a device using Haxe, OpenFL, Lime, or another library.
I would like to avoid downloading data that would exceed the size recommended for an app on each device.
What do you do to check that?
Try creating a file of that size! Then either delete it or reopen it and write (not append) over its contents.
I don't know whether all platforms Haxe supports will work with this trick, but the algorithm is reported to work in many places and languages (I personally tested it in Ruby, and saw the same suggestion for C++/.NET). To check whether X bytes of disk space are available (a sketch in another language follows the list):
open a new file for writing
seek X-1 bytes from the beginning
write a byte of data (whatever you want, 0, 42...)
close the file (probably unrelated to the task at hand, but don't forget to do that anyway)
If there's insufficient disk space, you'll likely get an exception at some point in this algorithm. You'll have to find out what errors to expect and process them properly.
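Since the steps map one-to-one onto plain file operations, here is the same algorithm as a Perl sketch, for illustration (the loca name and 40000-byte size are taken from the ihx session below):

use strict;
use warnings;
use Fcntl qw(SEEK_SET);

my $x = 40_000;                               # bytes of free space to test for
open my $fh, '>', 'loca'   or die "open: $!";
seek $fh, $x - 1, SEEK_SET or die "seek: $!";
print {$fh} "\0"           or die "write: $!";
close $fh                  or die "close: $!"; # buffered write errors surface here
die "file came out short" unless -s 'loca' == $x;  # see the size note below
unlink 'loca';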
Using ihx, I've found that this works and requires nothing but the Haxe standard library:
haxe interactive shell v0.3.4
type "help" for help
>> import sys.io.*;
>> var f = File.write('loca', true)
sys.io.FileOutput : { __f => #abstract }
>> f.seek(39999, FileSeek.SeekBegin)
Void : null
>> f.writeByte(0)
Void : null
>> f.close()
Void : null
After these manipulations, I had a file named loca of exactly 40000 bytes in my working directory.
By the way, be careful when doing things like this in ihx, since it re-runs the entire session, with the last entered line appended, each time.
Ongoing experimentation:
However, when there's insufficient disk space, it may not fail with errors. In that case you'll have to check the real size with sys.FileSystem.stat(path).size. And don't forget to delete the file if there's not enough space.

I want to run a script from another script, use the same version of perl, and reroute IO to a terminal-like textbox

I am somewhat familiar with various ways of calling a script from another one. I don't really need an overview of each, but I do have a few questions. Before that, though, I should tell you what my goal is.
I am working on a perl/tk program that: a) gathers information and puts it in a hash, and b) fires off other scripts that use the info hash, plus some command-line args. Each of these other scripts is available on the command line (using another command-line script) and needs to stay that way, so I can't just put all that into a module and call it good. I do have the authority to alter the scripts, but, again, they must also remain usable on the command line.
The current way of calling the other script is by using 'do', which means I can pass in the hash, and use the same version of perl (I think). But all the STDOUT (and STDERR too, I think) goes to the terminal.
Here's a simple example to demonstrate the output:
this_thing.pl
#!/usr/bin/env perl

use strict;
use warnings;
use utf8;
use Tk;

my $mw = MainWindow->new;
my $button = $mw->Button(
    -text    => 'start other thing',
    -command => \&start,
)->pack;
my $text = $mw->Text()->pack;

MainLoop;

sub start {
    my $script_path = 'this_other_thing.pl';
    if (not my $read = do $script_path) {
        warn "couldn't parse $script_path: $@" if $@;
        warn "couldn't do $script_path: $!" unless defined $read;
        warn "couldn't run $script_path" unless $read;
    }
}
this_other_thing.pl
#!/usr/bin/env perl
use strict;
use warnings;
use utf8;
print "Hello World!\n";
How can I redirect the STDOUT and STDIN (for interactive scripts that need input) to the text box using the 'do' method? Is that even possible?
If I can't use the 'do' method, what method can redirect the STDIN and STDOUT, as well as enable passing the hash in and using the same version of perl?
Edit: I posted this same question at Perlmonks, at the link in the first comment. So far, the best response seems to be to use modules and have the child script just be a wrapper for the module. Other possible solutions are IPC::Run(3) and IPC in general, Capture::Tiny and associated modules, and Tk::Filehandle. A solution was presented that redirects the output and error streams, but it seems not to affect the input stream. It's also a bit kludgy and not recommended.
Edit 2: I'm posting this here because I can't answer my own question yet.
Thanks for your suggestions and advice. I went with a suggestion on Perlmonks: turn the child scripts into modules and use wrapper scripts around them for normal use. I would then simply be able to use the modules, and all the code would be in one spot. This also ensures that I am not using different perls; I can route the output from the modules anywhere I want, and passing the hash in is now very easy.
To have both STDIN & STDOUT of a subprocess redirected, you should read the "Bidirectional Communication with Another Process" section of the perlipc man page: http://search.cpan.org/~rjbs/perl-5.18.1/pod/perlipc.pod#Bidirectional_Communication_with_Another_Process
Using the same version of perl works by finding out the name of your perl interpreter, and calling it explicitly. $^X is probably what you want. It may or may not work on different operating systems.
Passing a hash into a subprocess does not work easily. You can print the contents of the hash into a file and have the subprocess read and parse it. You might get away without using a file, by using the STDIN channel between the two processes, or you could open a separate pipe() for this purpose. Either way, printing the data and parsing it back cannot be avoided when using subprocesses, because the two processes use two perl interpreters, each with its own memory space, unable to see the other's variables.
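A minimal sketch combining those pieces - IPC::Open2 from the core distribution, Storable to serialize the hash, $^X for the same interpreter; the child script name and hash contents are just placeholders:

use strict;
use warnings;
use IPC::Open2;
use Storable qw(freeze);
use MIME::Base64;

my %info = (density => 300, file => 'input.png');  # placeholder data

# Start the child with the same perl that runs us
my $pid = open2(my $from_child, my $to_child, $^X, 'this_other_thing.pl');

# Serialize the hash to a single base64 line and send it on the child's STDIN;
# the child side would do: my %info = %{ thaw(decode_base64(scalar <STDIN>)) };
print {$to_child} encode_base64(freeze(\%info), ''), "\n";
close $to_child;

# Read whatever the child prints on its STDOUT
while (my $line = <$from_child>) {
    print "child: $line";
}
waitpid $pid, 0;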
You might avoid using a subprocess altogether with fork() + eval() + require(). In that case no separate perl interpreter is involved: the forked interpreter inherits the whole memory of your program, with all variables, open file descriptors, sockets, etc. in it, including the hash to be passed. However, I don't see where your second perl script could get its hash from when started from the CLI.
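A sketch of that fork-based approach (it assumes this_other_thing.pl returns a true value, as require demands of the files it loads):

my %info = (density => 300);   # visible to the child after fork

my $pid = fork() // die "fork failed: $!";
if ($pid == 0) {
    # Child: same interpreter, with a copy of all variables, including %info
    eval { require './this_other_thing.pl'; 1 } or warn "child failed: $@";
    exit 0;
}
waitpid $pid, 0;    # parent waits for the child to finish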

Linux serial port listener and interpreter?

I'm using a serial device for a project, and what I'm trying to accomplish on the PC side is listening for a command sent by the serial device, interpreting the query, running some code depending on the query, and transmitting back the result.
To be honest, I tried using PHP as the listener, and it works; unfortunately, the infinite loop required to make the script act as a receiver loads the CPU to 25%, so it's not really the best option.
I'm using cygwin right now, and I'd like to create a bash script using Linux native commands.
I can receive data by using:
cat /dev/ttyS2
And send a response with:
echo "command to send" > /dev/ttyS2
My question is: how do I make an automated listener that can receive and send data? The main issue is how to stop the cat /dev/ttyS2 command once information has been received, put the data into a variable that I can then compare in a switch or a series of if/else blocks, and afterwards send back a response and start the cycle all over again.
Thanks
Is this not what you're looking for?
while read -r line < /dev/ttyS2; do
    # $line is the line read, do something with it
    # which produces $result
    echo $result > /dev/ttyS2
done
It's possible that reopening the serial device on every line has some side-effect, in which case you could try:
while read -r line; do
    # $line is the line read, do something with it
    # which produces $result
    echo $result > /dev/ttyS2
done < /dev/ttyS2
You could also move the output redirection, but I suspect you will have to turn off stdout buffering.
To remain fairly system-independent, use a cross-platform programming language such as Python, with a cross-platform serial library like pySerial, and do the processing inside a script. I have used pySerial, and I could run the script cross-platform with almost no changes to the source code. By using bash you're limiting yourself quite a bit.
If you use the right tools, it is possible to have your CPU usage be exactly 0 when your device has no output.
To accomplish this, you should use a higher-level language (Perl, Python, or C/C++ would do, but not bash) and run a select loop on top of the file handle of your serial device. Here is an example for Perl, http://perldoc.perl.org/IO/Select.html, but you can use any other language as long as it has support for the select() syscall.
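A minimal sketch of such a loop in Perl, using the /dev/ttyS2 device from the question (it assumes the port is already configured, e.g. with stty, and that each read delivers one complete command; handle_command is a hypothetical stand-in for your own dispatch logic):

use strict;
use warnings;
use IO::Select;

open my $port, '+<', '/dev/ttyS2' or die "can't open port: $!";
my $sel = IO::Select->new($port);

while (1) {
    # can_read() blocks in select(), using no CPU until data arrives
    for my $fh ($sel->can_read) {
        my $n = sysread $fh, my $buf, 256;
        die "port closed\n" unless $n;
        chomp $buf;
        my $result = handle_command($buf);   # hypothetical dispatcher
        print {$fh} "$result\n";             # transmit the response back
    }
}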
I would recommend C/C++ with Qt 5.1.1; it's really easy, and if you are familiar with the framework it'll be a piece of cake! Here you can find more information, and here more helpful examples; give it a try, it's really pain-free! You can also develop on Windows and then port your code to Linux - straightforward.
Declare an object like this:
QSerialPort mPort; //remember to #include <QtSerialPort/QSerialPort>
//also add QT += serialport to your .pro file
Then add this code:
MainWindow::MainWindow(QWidget *parent) : QMainWindow(parent)
{
    setupUi(this);
    connect(this->pushButton, SIGNAL(clicked()), this, SLOT(sendData()));
    mPort.setPortName("ttyS0");
    mPort.setBaudRate(QSerialPort::Baud115200);
    mPort.setParity(QSerialPort::EvenParity);
    if (!mPort.open(QSerialPort::ReadWrite))
    {
        this->label->setText(tr("unable to open port, %1").arg(mPort.error()));
    }
    connect(&(this->mPort), SIGNAL(readyRead()), this, SLOT(readData()));
}

void MainWindow::sendData()
{
    QByteArray data = lineEdit->text().toLatin1();
    if (mPort.isOpen())
    {
        mPort.write(data);
    }
    else
    {
        this->label->setText(tr("port closed %1").arg(mPort.error()));
    }
}

void MainWindow::readData()
{
    QString newData;
    int bread = 0;
    while (bread < mPort.bytesAvailable()) {
        newData += mPort.readAll();
        bread++;
    }
    this->textEdit->insertPlainText("\n" + newData);
}
