I have a main Qt GUI app running on an embedded Linux system. Now I need to create another app to monitor a rotating knob's position and send this information to the main Qt GUI app. So what's the normal way for these two apps to communicate?
There are many options, though using a pipe or socket is common. What you're looking for is interprocess communication (IPC).
Qt has an interprocess communication abstraction that you can probably use for this:
https://doc.qt.io/qt-5/ipc.html
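The socket option is easy to sketch. Below is a minimal Python illustration using a Unix domain socket; the socket path, the payload format, and both function names are made up for this demo, and inside the Qt app itself you would more likely use the equivalent QLocalServer/QLocalSocket classes from that IPC page.

```python
# Minimal sketch of local-socket IPC between the two apps.
# SOCK_PATH and the "knob position as ASCII" payload are hypothetical.
import os
import socket
import threading

SOCK_PATH = "/tmp/knob_monitor.sock"  # hypothetical rendezvous path
received = {}
ready = threading.Event()

def gui_app():
    # Stand-in for the Qt GUI app: listen for knob updates.
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as srv:
        srv.bind(SOCK_PATH)
        srv.listen(1)
        ready.set()                     # tell the monitor we are listening
        conn, _ = srv.accept()
        with conn:
            received["pos"] = int(conn.recv(16).decode())

def knob_monitor():
    # Stand-in for the knob-monitor app: send the current position.
    ready.wait()
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as cli:
        cli.connect(SOCK_PATH)
        cli.sendall(b"42")              # pretend the knob sits at position 42

if os.path.exists(SOCK_PATH):
    os.unlink(SOCK_PATH)
t = threading.Thread(target=gui_app)
t.start()
knob_monitor()
t.join()
os.unlink(SOCK_PATH)
print("GUI received knob position:", received["pos"])
```

In a real deployment the two functions would live in separate processes; the thread here only keeps the demo self-contained.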
I have a separate process and GUI for my application; details below. Now I am ready to bring the application into production. Although there are likely only 2 users on this planet, I want to handle the startup of the application correctly, that is, adhering to proper Unix philosophy. The application might be able to run on Windows, but I am not interested in that.
I think I have 2 options:
Starting both the player process and the GUI from their own init.d scripts, and having a third script call both, usually placed in an autostart directory. Or just placing both the process and the GUI startup script in the correct rcX.d.
Start the player process from an init.d script and fork the GUI from within the process. I could pass parameters to the process to tell whether or not it should start the GUI. This does not preclude starting a GUI process manually elsewhere.
Both options have variations, but the difference between the two is fundamental.
More info on the application
The application is an internet radio player, with the special feature that it can play back previously recorded streams, introducing a time shift to compensate for the time difference when the player and transmitter are in different time zones.
The recorder is part of the same project, but not part of the application.
The application consists of the player, which can play headless and be fully controlled through configuration files. The player can also be controlled by a GUI, which communicates with the player over TCP/IP. The application gracefully handles the player running without a GUI, with a single GUI, or with multiple GUIs. The GUI gracefully handles the absence or re-connection of the player.
If the player runs headless I want to be able to connect from any PC with a GUI. In some situations I want to run the player and GUI on the same laptop or PC. The main deployment is a dedicated RasPi player with a touch screen. This RasPi should launch both the player and the GUI simultaneously when I start the application. Optionally I can start another GUI from another PC to control settings I cannot access through the touch screen.
I don't think it is relevant, but both parts are written in Tcl/Tk. The player has an extension which interfaces to the libmpv API, part of the mpv media player.
So the player and the GUI are independent enough that nothing breaks if one runs without the other, and they recover gracefully when both run. The question is how to start both processes: independent init.d scripts, or forking.
Assuming both the player and the GUI are implemented as Tcl scripts, you can use the source command to load one from the other. For example, when starting the GUI, it can detect that the player is not running (because the socket connection fails). In that case it can do source player.tcl. To avoid name conflicts you can use different namespaces, or load the player in a separate interpreter. I don't expect either of the components to do any blocking actions. But if they do, you can even load the player in an interpreter in another thread.
set dir [file dirname [file normalize [info script]]]
interp create player
player eval [list source [file join $dir player.tcl]]
There are other possibilities for deciding between starting one or both components, like passing a command line option to either of the components to also load the other component.
Since you are specifically interested in Linux, another strategy would be to make use of D-Bus. Your player could publish a D-Bus interface (using dbif). The GUI can then call a method of that interface with the "-autostart" option on (the default). When set up correctly, that causes the player to be started if it isn't already running.
In player.tcl:
package require dbif
dbif connect tk.tcl.mpvplayer
dbif method / Version {} version {return $::version}
You can add more methods, and signals and properties. But since you already have a TCP/IP interface, you don't need to implement a full API via dbus.
In your GUI application:
package require dbus
dbus connect
# Auto-start the player, if necessary
dbus call -dest tk.tcl.mpvplayer / tk.tcl.mpvplayer Version
To enable auto-starting the player, create a file ~/.local/share/dbus-1/services/tk.tcl.mpvplayer.service:
[D-BUS Service]
Name=tk.tcl.mpvplayer
Exec=/home/pi/mpvplayer/player.tcl
The examples above use the session bus, which is normally associated with the display (:0). To make it work when the player runs headless, you may need to set the DISPLAY variable for it to connect to the correct session bus. Alternatively you can use the system bus. But that will require some adjustments.
I am working for a lighting automation company and we will design and develop a product running an embedded Linux operating system built with Yocto or Buildroot.
We will use a Linux SoM inside the product; the approximate specs of the SoM are:
1.2/1.5GHz MPU
128/256MB RAM
4/8/16GB eMMC/SD
various peripherals UART, SPI...
At this point, the Linux side must run a web-based app which monitors and controls the luminaires. In general, the project intends to control the lighting of a building/home using the web app running on the device. The front end shall show each luminaire on the page, with buttons and icons to help the client control and monitor the luminaires. The front end may have a couple of different pages. Overall there can be at most 250 luminaires, with 10 bytes of data for each luminaire.
I will have an MCU running alongside the Linux SoM, which does the real-time work and is connected to it over UART. The real-time MCU communicates with the luminaires and sends their data to Linux through the UART, and vice versa. The web app should start a web server, I guess, so that the client can connect to the app from a PC/smartphone browser. I also think I will need a database, because the device should retain the data across restarts or a power failure.
At this point I am not sure what kind of design I should do. I do not want to create a complex application or over-engineer it. We are currently 2 embedded guys, and 2 software guys will join us soon. I am an embedded C/C++ guy, and although I know in a very general sense how Vue.js, React.js etc. work, I am not really sure how well they will do on embedded Linux with restricted resources such as RAM.
I have 3 different designs in my head:
1st ->
Receive data through UART directly using a high-level language inside the web-app backend (Node.js, Flask or ??? if possible)
The web-app backend (Node.js, Flask etc. or ???) either writes the received data to a database (SQLite??) or acts on it directly in a proper way
The front end communicates with the backend through REST APIs (Vue.js, React or ???)
2nd ->
Receive data through UART with a plain C executable (circular buffer etc.)
The web-app backend (Node.js, Flask or ???) receives the data through a local socket from the C program and does the database operations etc.
The front end communicates with the backend through REST APIs (Vue.js, React or ???)
3rd -> if Flask, Vue.js etc. complicate the Linux application:
Receive data through UART with a plain C executable (circular buffer etc.)
Use lighttpd or similar to start a web server, and use FastCGI?
As far as I have learnt from the web, with the specs of the SoM I will use, technologies such as Node.js and Vue.js can be handled easily and there should be no problem at all. If so, even though it is a quite general question: how do I do this in a simple and modern way?
I think the first design is the best.
That way you build the whole system out of modules, so in the future it will be easier to change something.
All of the frameworks you would use are maintained by big companies, so they are likely to be supported for a long time.
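The data path of that first design fits in a few lines. The sketch below assumes a hypothetical 10-byte frame per luminaire (1 byte ID + 9 bytes of state) and uses an in-memory SQLite database; on the device the database would be a file on eMMC, and the bytes would come from the UART (e.g. via pyserial) rather than a literal.

```python
# Sketch of design 1's data path: UART frames -> SQLite.
# The 10-byte frame layout (1-byte id + 9 state bytes) is an assumption.
import sqlite3

def parse_frames(buf: bytes):
    # Split the UART byte stream into fixed-size 10-byte frames,
    # dropping any trailing partial frame.
    for i in range(0, len(buf) - len(buf) % 10, 10):
        frame = buf[i:i + 10]
        yield frame[0], frame[1:]

db = sqlite3.connect(":memory:")  # on the device: a file on eMMC/SD
db.execute("CREATE TABLE luminaire (id INTEGER PRIMARY KEY, state BLOB)")

# Simulate two luminaire frames arriving over the UART.
uart_bytes = bytes([1]) + b"\x01" * 9 + bytes([2]) + b"\x02" * 9

for lid, state in parse_frames(uart_bytes):
    # Upsert per luminaire, so a restart only loses the last update.
    db.execute("INSERT OR REPLACE INTO luminaire VALUES (?, ?)", (lid, state))
db.commit()

rows = db.execute("SELECT id, state FROM luminaire ORDER BY id").fetchall()
print(rows)
```

A REST backend (Flask, Node.js, ...) would then serve this table to the front end; at 250 luminaires × 10 bytes the whole state is about 2.5 KB, which is trivial for the SoM's RAM.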
Currently I'm developing a data acquisition program for my experiment in C++ on a Linux-based machine (Ubuntu). I also have many VIs in LabVIEW, programmed on a Windows machine, to control the instruments of the experiment (motors, signal generator, ...). The purpose is to have two-way communication between the two PCs: the Linux machine will ask which VIs to be executed, and when a VI has finished, a signal is sent back to the Linux machine.
My questions are:
Can I send a signal or a command to LabVIEW on Windows from Linux (from the terminal, so it can be implemented in my C code) and vice versa? How?
Could TCP in LabVIEW be a solution? Or should I set up the inter-PC "talking" over serial communication (which is easy to set up physically)?
The best (also the easiest) way is to implement TCP-based client-server communication. (TCP ensures the data is lossless; with other mechanisms like UDP or serial you always have to verify that your commands were received correctly.)
On the LabVIEW side, you will have a TCP listener (server) which listens for commands from the Linux machine on your specified port.
Upon receiving a command, the LabVIEW code can do the work and reply over the same TCP connection.
This is a very good article about your question: https://decibel.ni.com/content/docs/DOC-9131
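The command/reply pattern described above can be sketched in a few lines. In this Python demo a server thread stands in for the LabVIEW TCP listener; the `RUN motor_vi` / `DONE` command names are made up, and on the real Linux machine the client half would be written in C with the ordinary BSD socket calls.

```python
# Sketch of the TCP command/reply pattern between the two machines.
# The server thread plays the role of the LabVIEW TCP listener.
import socket
import threading

def labview_listener(srv: socket.socket):
    # Accept one connection, read a command line, reply when "done".
    conn, _ = srv.accept()
    with conn:
        cmd = conn.recv(64).decode().strip()
        # A real VI would execute here; we just acknowledge the command.
        conn.sendall(f"DONE {cmd}\n".encode())

srv = socket.socket()
srv.bind(("127.0.0.1", 0))          # ephemeral port, for the demo only
srv.listen(1)
port = srv.getsockname()[1]
t = threading.Thread(target=labview_listener, args=(srv,))
t.start()

# Linux side: connect, send a command, block until the completion reply.
with socket.create_connection(("127.0.0.1", port)) as cli:
    cli.sendall(b"RUN motor_vi\n")
    reply = cli.recv(64).decode().strip()
t.join()
srv.close()
print(reply)
```

The blocking `recv` on the client is what gives you the "send back a signal when finished" behaviour: the C program simply waits on the socket until LabVIEW replies.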
There are several options for communicating between C++ and LabVIEW (as well as between Linux and Windows).
If you are willing to run LabVIEW on your Linux machine, you can make use of several of the LabVIEW communication architectures. Here is NI's white paper:
http://www.ni.com/white-paper/12079/en/
It covers options such as Shared Variables, Network Streams, Web Services, and TCP/IP.
You can also compile your LabVIEW code to a DLL and call it from C++ to make use of some of the above features. If not, you will likely have to go the TCP/IP or web-service route.
I would recommend TCP/IP; it's pretty simple to implement on both sides.
If you are more familiar with serial protocols you can also use them to communicate.
On Linux a fairly common method for IPC between userland processes and services is, for example, a socket interface (either Unix domain or netlink).
Simply -- What is the Windows analog of this and how do userland processes communicate with services?
To set the stage: Assume I have a background service running that monitors devices on a network. If I wanted to write a program to utilize the services provided by this service, what would be the common "Windows-way" of doing this?
If I am completely off base here, what is the common way a Windows service exposes itself on the Windows OS so that other processes know it is actively listening for connections?
Windows has named pipes:
"A named pipe is a named, one-way or duplex pipe for communication between the pipe server and one or more pipe clients. All instances of a named pipe share the same pipe name, but each instance has its own buffers and handles, and provides a separate conduit for client/server communication. The use of instances enables multiple pipe clients to use the same named pipe simultaneously."
https://msdn.microsoft.com/en-us/library/windows/desktop/aa365590%28v=vs.85%29.aspx
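For a quick feel of the pattern without dropping to the Win32 API, Python's standard library exposes the same client/server model through `multiprocessing.connection`, whose Windows backend is built on named pipes (family `'AF_PIPE'`, with addresses like `r'\\.\pipe\demo'`). The sketch below uses a localhost TCP address so it also runs on other platforms; the authkey and message contents are made up.

```python
# Client/server IPC via multiprocessing.connection. On Windows you could
# pass a named-pipe address such as r'\\.\pipe\demo' to Listener/Client.
from multiprocessing.connection import Listener, Client
import threading

AUTHKEY = b"demo-secret"   # arbitrary shared secret for the demo
result = {}
addr_box = {}
ready = threading.Event()

def service():
    # Stand-in for the background service: accept one client, answer it.
    with Listener(("127.0.0.1", 0), authkey=AUTHKEY) as listener:
        addr_box["addr"] = listener.address   # actual (host, port) chosen
        ready.set()
        with listener.accept() as conn:
            result["request"] = conn.recv()
            conn.send("pong")

t = threading.Thread(target=service)
t.start()
ready.wait()

# Stand-in for the userland client process.
with Client(addr_box["addr"], authkey=AUTHKEY) as conn:
    conn.send("ping")
    result["reply"] = conn.recv()
t.join()
print(result["reply"])
```

In native Windows code the equivalent calls are CreateNamedPipe on the service side and CreateFile/ReadFile/WriteFile on the client side, as described on the MSDN page above.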
I am learning some embedded programming. I am using Linux as my platform and I want to create a daemon that checks whether a particular device (magstripe reader, keypad, etc.) is active. For example, while my daemon is running in the background and I make a keypress event, the daemon should do something.
How should I implement this app? And how can I check for events from the devices?
Thanks.
The most common way is to use poll(2): you open(2) the device node, then poll the file descriptor until an event arrives.
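The open-then-poll pattern looks like this. The Python sketch below uses a pipe as a stand-in for the device file descriptor; in the real daemon you would open an input device node instead (the `/dev/input/event0` path is just an example), and the same three steps map directly onto the C calls `open`, `poll`, and `read`.

```python
# Minimal sketch of the poll(2) pattern: open a descriptor, register it,
# wait for it to become readable, then read the event.
import os
import select

r, w = os.pipe()            # stand-in for os.open("/dev/input/event0", ...)
p = select.poll()
p.register(r, select.POLLIN)

os.write(w, b"\x01")        # simulate the device producing an event

events = p.poll(1000)       # wait up to 1 second for activity
for fd, flag in events:
    if flag & select.POLLIN:
        data = os.read(fd, 16)
        print("event data:", data)

os.close(r)
os.close(w)
```

In the daemon you would loop on `p.poll()` with no timeout and register every device descriptor you care about, so one thread can watch the keypad, the magstripe reader, and so on at once.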