Simultaneously using a Bluetooth remote (Android) and running a program - bluetooth

I'm a student at a hogeschool (university of applied sciences) in the Netherlands. We're working with the LEGO Mindstorms NXT for a project.
I'm using my phone (minddroid and other applications) to drive the NXT, but I don't know how to run a program on the NXT at the same time.
For example, I drive it over a black line with the remote, and because the program is running, the sensor detects the black line and the program stops the robot.

Is your question how to get the NXT to both communicate over Bluetooth and monitor the line at the same time? If so, there are two general solutions:
Main Loop
In your main loop, first check for communications from the bluetooth system, and then check the sensor to see if the black line is detected. Then repeat.
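A minimal sketch of the polling approach in NXC (assuming the standard NXC API, a light sensor on port 3, motors on ports B and C, and remote commands arriving in Bluetooth mailbox 1; the threshold is a made-up value you would tune):

#define BLACK_THRESHOLD 40  // tune for your surface and lighting

task main()
{
    string msg;
    SetSensorLight(IN_3);                      // light sensor on port 3
    while (true)
    {
        // 1. Poll the Bluetooth mailbox for a drive command from the phone
        if (ReceiveMessage(MAILBOX1, true, msg) == 0)  // 0 = a message was read
        {
            // ... decode msg and set the motors accordingly ...
        }
        // 2. Poll the light sensor and stop on the black line
        if (Sensor(IN_3) < BLACK_THRESHOLD)
            Off(OUT_BC);
        Wait(10);                              // don't busy-spin
    }
}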
Interrupt
In this solution, the main process would handle communications with the Android phone. The line sensor would be set up to cause a program interrupt when it detects the black line.
The interrupt service routine (ISR) would either set a flag to indicate that the robot should stop, or would stop the robot directly.
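The stock NXT firmware doesn't expose true hardware interrupts to user programs, but a second concurrent task gives you the same structure. A sketch in NXC (same port assumptions as above; watch_line plays the role of the ISR and only sets a flag):

bool line_detected = false;        // the "ISR" flag

task watch_line()
{
    while (true)
    {
        if (Sensor(IN_3) < 40)     // black line seen
            line_detected = true;
        Wait(5);
    }
}

task drive()
{
    string msg;
    while (true)
    {
        // ... handle Bluetooth commands as before ...
        if (line_detected)
        {
            Off(OUT_BC);           // honour the flag set by the watcher
            line_detected = false;
        }
        Wait(10);
    }
}

task main()
{
    SetSensorLight(IN_3);
    Precedes(watch_line, drive);   // run both tasks concurrently when main ends
}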
Which of these solutions you choose often depends on the features of your operating system.
PS It could also be that I'm not understanding your question correctly. In that case, never mind...

No, I meant that I wanted to run a program simultaneously with the Bluetooth remote.
But I solved it: connecting to the NXT brick with the mobile app only let me send direct commands. The fix was to connect to the running program instead of to the NXT robot itself.
Thanks anyway!


Separated process and GUI -- how to start the application in the correct way

I have a separate process and GUI for my application; details below. Now I am ready to bring the application "into production". Although there are likely only 2 users on this planet, I want to handle the startup of the application correctly, that is, adhering to the correct Unix philosophy. Although the application might be able to run on Windows, I am not interested in that.
I think I have 2 options:
1. Start both the player process and the GUI from their own init.d scripts, and have a third script call both, usually placed in an autostart directory. Or just put both the process and the GUI startup scripts in the correct rcX.d.
2. Start the player process from an init.d script and fork the GUI from within the process. I could pass parameters to the process to tell it whether or not it should start the GUI. This does not preclude starting a GUI process manually elsewhere.
Both options have variations, but the difference between the two is fundamental.
More info on the application
The application is an internet radio player, but with the special feature that it can play back previously recorded streams, introducing a time shift to compensate for time differences when the player and the transmitter are in different time zones.
The recorder is part of the same project, but not part of the application.
The application consists of the player, which is able to play headless, fully controlled through configuration files. The player can also be controlled by a GUI, which communicates with the player through TCP/IP. The application gracefully handles the player running without a GUI, with a single GUI, or with multiple GUIs. The GUI gracefully handles the absence or re-connection of the player.
If the player runs headless I want to be able to connect from any PC with a GUI. In some situations I want to use the player and GUI on the same laptop or PC. The main application is a dedicated RasPI player with a touch screen. This RasPI should launch both the player and the GUI simultaneously when I start the application. Optionally I can start another GUI from another PC to control settings I cannot access through the touch screen.
I don't think it is relevant, but both parts are written in Tcl/Tk. The player has an extension which interfaces to the libmpv API, part of the mpv media player.
So the player and the GUI are independent enough that nothing breaks if one runs without the other, and they recover gracefully when both run. The question is how to start both processes: independent init.d scripts, or forking.
Assuming both the player and the GUI are implemented as Tcl scripts, you can use the source command to load one from the other. For example, when starting the GUI, it can detect that the player is not running (because the socket connection fails). In that case it can do source player.tcl. To avoid name conflicts you can use different namespaces, or load the player in a separate interpreter. I don't expect either of the components to do any blocking actions. But if they do, you can even load the player in an interpreter in another thread.
set dir [file dirname [file normalize [info script]]]
interp create player
player eval [list source [file join $dir player.tcl]]
There are other possibilities for deciding between starting one or both components, like passing a command line option to either of the components to also load the other component.
Since you are specifically interested in Linux, another strategy would be to make use of dbus. Your player could publish a dbus interface (using dbif). The GUI can then call a method of that interface, with the "-autostart" option on (the default). When set up correctly, that would cause the player to start, if it isn't already running.
In player.tcl:
package require dbif
dbif connect tk.tcl.mpvplayer
dbif method / Version {} version {return $::version}
You can add more methods, and signals and properties. But since you already have a TCP/IP interface, you don't need to implement a full API via dbus.
In your GUI application:
package require dbus
dbus connect
# Auto-start the player, if necessary
dbus call -dest tk.tcl.mpvplayer / tk.tcl.mpvplayer Version
To enable auto-starting the player, create a file ~/.local/share/dbus-1/services/tk.tcl.mpvplayer.service:
[D-BUS Service]
Name=tk.tcl.mpvplayer
Exec=/home/pi/mpvplayer/player.tcl
The examples above use the session bus, which is normally associated with the display (:0). To make it work when the player runs headless, you may need to set the DISPLAY variable for it to connect to the correct session bus. Alternatively you can use the system bus. But that will require some adjustments.

How do operating systems determine where to direct device input?

Let me give some concrete context for motivation.
I've been enjoying the program AHK for quite some time. It allows the user to script various tasks on a Windows machine and, if need be, bind those actions to hotkeys.
I've never understood how it is that if I create a binding for, say, alt+k, Windows will then know to first inform AHK when that key combination is pressed. And if AHK then decides to synthesize a keystroke in response, Windows will know the intended target for that command.
Furthermore, if I start a program in administrator mode, it seems that AHK now no longer gets to preempt any device input. Now the input is immediately passed to the currently focused program. That's unless I also run the AHK script in administrator mode, in which case everything is back to normal.
Can anybody shed some light on what's going on behind the scenes here? And if there are considerable differences on Linux, I'm also interested to hear about those.
Based on what I understand from my Operating Systems course, I will answer this in a generic way.
Every I/O device has a device controller. The operating system never communicates with the device controller directly. The OS uses a special piece of software called a device driver (usually provided by the vendor) which sits between the device controller and the OS.
The device driver understands the device controller and provides the OS with a uniform interface for communicating with the device. For example, to start an I/O operation the device driver will load the appropriate registers in the device controller. The controller will examine the contents of the registers and determine what action is to be taken (like reading a character from the keyboard). The controller will then initiate the transfer from the device to a local buffer.
Once the transfer is complete, it will inform the driver using an interrupt. The driver will return control back to the operating system, possibly returning a pointer to the data. For other operations, the driver returns status information.
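Since you also asked about Linux: there you can watch this pipeline from the userspace end. The kernel's input subsystem exposes each device as /dev/input/eventN, and any process with permission can read the event records the driver produced. A small C sketch (the device path is an assumption; check /proc/bus/input/devices for the right node):

#include <fcntl.h>
#include <linux/input.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    // event3 is just an example; the kernel decides which node your keyboard gets
    int fd = open("/dev/input/event3", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct input_event ev;
    while (read(fd, &ev, sizeof ev) == sizeof ev) {
        if (ev.type == EV_KEY)    // key events: value 1 = press, 0 = release, 2 = autorepeat
            printf("key code %d, value %d\n", ev.code, ev.value);
    }
    close(fd);
    return 0;
}

AHK-style remappers on Linux (and the display servers themselves) sit on exactly this evdev interface, optionally injecting synthesized events back in through the uinput device.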

What on earth could conceivably allow clicking a link in a web browser to physically reboot a Linux system?

Just to preface: I'm not looking for specifics on what caused my particular system's instability, just seeking to understand how this could be possible, since in my mind there are a multitude of layers between a function being called in response to a browser handling a mouse click and whatever is at the hardware level that could cause an instant hardware reboot (no 'the system is shutting down' or any such).
Just to give some context: just before my system rebooted, I had 12 tabs open in Firefox on Linux Mint, one of which was a YouTube video. I swapped to another tab to check something and clicked the following URL in a link (http://kripken.github.io/mloc_emscripten_talk/#/), a slide-show which causes no ill effects now that I'm visiting it again. But the instant I clicked the link, BAM, all lights out on a laptop with a full battery and the power cord connected.
So my question is: what sort of error could spill over from an application running in user space into whatever space is required to take down the entire system?
Userspace processes often access the kernel via system call interfaces.
For example, playing video can use system calls related to the video interfaces.
Suppose one of these interfaces has a bug (which can be driver specific); then there is an obvious chance of it rebooting the system.
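To make that boundary concrete, here is a small C sketch of the kind of crossing involved: a perfectly ordinary ioctl() on the GPU device node is serviced by kernel-side driver code, where a bug runs with full hardware access (the device path is an assumption, and the header location varies between distributions):

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <libdrm/drm.h>   // from libdrm; may be <drm/drm.h> on some systems

int main(void)
{
    int fd = open("/dev/dri/card0", O_RDWR);   // the GPU's device node
    if (fd < 0) { perror("open"); return 1; }

    struct drm_version v = {0};
    // This request is executed inside the kernel by the GPU driver; any
    // bug on that side of the line is a kernel bug, not an app crash.
    if (ioctl(fd, DRM_IOCTL_VERSION, &v) == 0)
        printf("driver version %d.%d\n", v.version_major, v.version_minor);

    close(fd);
    return 0;
}

A page that exercises the GPU (video playback, WebGL slides) makes many such driver calls on your behalf; if one of them hits a driver bug, there is nothing above it to catch the fall.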

Two applications using framebuffer

I'm writing a set of Linux framebuffer applications for embedded hardware. The main application runs on tty1 from /etc/inittab (for now it's just a touchscreen test) and is supposed to run permanently. The second application is executed from acpid when the power button is pressed; it's supposed to ask the user whether they really want to shut the device down, and read the answer from the touchscreen. What I want is for the second application to take over the framebuffer while it runs, and then release it and restore the state of the screen, so the main application can continue without a restart.
Is this scenario possible with 2 different applications, and how should they interact? Right now the second application simply can't draw anything while the main application is running.
I know I can kill and restart the main application, or move the poweroff notification into the main application and have acpid just send it a signal, but those solutions don't seem optimal.
One solution would of course be to have THREE applications: one that does the actual framebuffer interaction, while the other two just send it messages (in some form, e.g. through a pipe, a socket or similar). This is how "window managers" and similar systems usually work (just much more complicated, of course).
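A sketch in C of the messaging side of that design, with the power-button helper asking the single framebuffer-owning process to show the dialog over a Unix domain socket (the socket path and the message strings are made up for illustration):

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    strncpy(addr.sun_path, "/run/fbui.sock", sizeof addr.sun_path - 1);
    if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
        perror("connect");
        return 1;
    }

    // Ask the display process to draw the shutdown question; it owns the
    // framebuffer and the touchscreen, so no takeover/restore is needed.
    const char *req = "show-shutdown-dialog\n";
    write(fd, req, strlen(req));

    char reply[32];
    ssize_t n = read(fd, reply, sizeof reply - 1);   // e.g. "yes\n" or "no\n"
    if (n > 0) {
        reply[n] = '\0';
        printf("user answered: %s", reply);
    }
    close(fd);
    return 0;
}

The display process just select()s on the socket alongside its normal work and draws the dialog itself when asked, which neatly sidesteps the save-and-restore problem.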

How do I detect usb drive insertion in Linux?

I've written an application for an embedded linux project, and I want my application to display a particular menu when the user inserts a USB drive. I'm programming the application in C++ with Qt.
My system doesn't have D-Bus, but it is using udev. It seems to me that udev is the "proper" way to do this detection, but it seems complicated.
Can anyone point me in the right direction to get started with this? Is there a way to do it without udev, and if not, is there a good "getting started" guide for udev I could use? I really don't need much functionality, just a way for my application to be notified when a drive is inserted (and enough info for my app to mount the drive).
Thanks
Marlon
The section "libudev - Monitoring Interface" of this document http://www.signal11.us/oss/udev/
should get you started.
Instead of a while(1) loop and a sleep, just make a function with that stuff and then set up a periodic Qt timer to call it every half second or whatever.
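A sketch in C of what that function might look like against libudev (link with -ludev; the "block" subsystem filter is my assumption that a USB drive is what you care about). The monitor socket is non-blocking by default, so calling this from a periodic timer returns immediately when nothing has happened:

#include <libudev.h>
#include <stdio.h>

static struct udev *udev;
static struct udev_monitor *mon;

void monitor_init(void)                 // call once at startup
{
    udev = udev_new();
    mon = udev_monitor_new_from_netlink(udev, "udev");
    udev_monitor_filter_add_match_subsystem_devtype(mon, "block", NULL);
    udev_monitor_enable_receiving(mon);
}

void monitor_poll(void)                 // call this from the Qt timer
{
    struct udev_device *dev = udev_monitor_receive_device(mon);
    if (dev) {
        const char *action = udev_device_get_action(dev);   // "add", "remove", ...
        const char *node = udev_device_get_devnode(dev);    // e.g. /dev/sdb1
        if (action && node)
            printf("%s: %s\n", action, node);
        udev_device_unref(dev);
    }
}

Since you're in Qt anyway, an alternative to the timer is to wrap udev_monitor_get_fd(mon) in a QSocketNotifier, so your slot runs as soon as the event arrives instead of every half second.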
